00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1894 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3160 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.070 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.070 The recommended git tool is: git 00:00:00.070 using credential 00000000-0000-0000-0000-000000000002 00:00:00.072 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.122 Fetching changes from the remote Git repository 00:00:00.124 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.172 Using shallow fetch with depth 1 00:00:00.172 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.172 > git --version # timeout=10 00:00:00.217 > git --version # 'git version 2.39.2' 00:00:00.217 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.253 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.253 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.878 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.891 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.904 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:06.904 > git config core.sparsecheckout # timeout=10 00:00:06.916 > git read-tree -mu HEAD # timeout=10 00:00:06.933 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:06.952 Commit message: "pool: fixes for VisualBuild class" 00:00:06.952 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:07.040 [Pipeline] Start of Pipeline 00:00:07.055 [Pipeline] library 00:00:07.057 Loading library shm_lib@master 00:00:07.057 Library shm_lib@master is cached. Copying from home. 00:00:07.076 [Pipeline] node 00:00:07.085 Running on CYP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.086 [Pipeline] { 00:00:07.097 [Pipeline] catchError 00:00:07.099 [Pipeline] { 00:00:07.115 [Pipeline] wrap 00:00:07.124 [Pipeline] { 00:00:07.130 [Pipeline] stage 00:00:07.131 [Pipeline] { (Prologue) 00:00:07.330 [Pipeline] sh 00:00:07.616 + logger -p user.info -t JENKINS-CI 00:00:07.638 [Pipeline] echo 00:00:07.640 Node: CYP11 00:00:07.649 [Pipeline] sh 00:00:07.952 [Pipeline] setCustomBuildProperty 00:00:07.965 [Pipeline] echo 00:00:07.967 Cleanup processes 00:00:07.972 [Pipeline] sh 00:00:08.259 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.259 147711 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.274 [Pipeline] sh 00:00:08.562 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.562 ++ grep -v 'sudo pgrep' 00:00:08.562 ++ awk '{print $1}' 00:00:08.562 + sudo kill -9 00:00:08.562 + true 00:00:08.577 [Pipeline] cleanWs 00:00:08.588 [WS-CLEANUP] Deleting project workspace... 00:00:08.589 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.596 [WS-CLEANUP] done 00:00:08.601 [Pipeline] setCustomBuildProperty 00:00:08.618 [Pipeline] sh 00:00:08.906 + sudo git config --global --replace-all safe.directory '*' 00:00:08.981 [Pipeline] nodesByLabel 00:00:08.983 Found a total of 2 nodes with the 'sorcerer' label 00:00:08.995 [Pipeline] httpRequest 00:00:08.999 HttpMethod: GET 00:00:09.000 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:09.004 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:09.025 Response Code: HTTP/1.1 200 OK 00:00:09.026 Success: Status code 200 is in the accepted range: 200,404 00:00:09.026 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:11.506 [Pipeline] sh 00:00:11.786 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:11.804 [Pipeline] httpRequest 00:00:11.809 HttpMethod: GET 00:00:11.810 URL: http://10.211.164.101/packages/spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz 00:00:11.810 Sending request to url: http://10.211.164.101/packages/spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz 00:00:11.814 Response Code: HTTP/1.1 200 OK 00:00:11.814 Success: Status code 200 is in the accepted range: 200,404 00:00:11.815 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz 00:00:26.828 [Pipeline] sh 00:00:27.116 + tar --no-same-owner -xf spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz 00:00:29.675 [Pipeline] sh 00:00:29.962 + git -C spdk log --oneline -n5 00:00:29.962 e55c9a812 vbdev_error: decrement error_num atomically 00:00:29.962 f16e9f4d2 lib/event: framework_get_reactors supports getting pid and tid 00:00:29.962 2d610abe8 lib/env_dpdk: add spdk_get_tid function 00:00:29.962 f470a0dc6 event: do not call reactor events from spdk_thread context 00:00:29.962 8d3fdcaba nvmf: cleanup maximum number of subsystem namespace remanent code 00:00:29.985 [Pipeline] withCredentials 00:00:30.016 > git --version # timeout=10 00:00:30.029 > git --version # 'git version 2.39.2' 00:00:30.057 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:30.063 [Pipeline] { 00:00:30.085 [Pipeline] retry 00:00:30.089 [Pipeline] { 00:00:30.110 [Pipeline] sh 00:00:30.391 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:30.663 [Pipeline] } 00:00:30.684 [Pipeline] // retry 00:00:30.688 [Pipeline] } 00:00:30.708 [Pipeline] // withCredentials 00:00:30.718 [Pipeline] httpRequest 00:00:30.722 HttpMethod: GET 00:00:30.722 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:30.726 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:30.754 Response Code: HTTP/1.1 200 OK 00:00:30.755 Success: Status code 200 is in the accepted range: 200,404 00:00:30.755 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:54.909 [Pipeline] sh 00:00:55.196 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:57.122 [Pipeline] sh 00:00:57.404 + git -C dpdk log --oneline -n5 00:00:57.404 caf0f5d395 version: 22.11.4 00:00:57.404 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:00:57.404 dc9c799c7d vhost: fix missing spinlock unlock 00:00:57.404 4307659a90 net/mlx5: fix LACP redirection in Rx domain 
00:00:57.404 6ef77f2a5e net/gve: fix RX buffer size alignment 00:00:57.416 [Pipeline] } 00:00:57.435 [Pipeline] // stage 00:00:57.446 [Pipeline] stage 00:00:57.448 [Pipeline] { (Prepare) 00:00:57.468 [Pipeline] writeFile 00:00:57.485 [Pipeline] sh 00:00:57.770 + logger -p user.info -t JENKINS-CI 00:00:57.783 [Pipeline] sh 00:00:58.070 + logger -p user.info -t JENKINS-CI 00:00:58.082 [Pipeline] sh 00:00:58.366 + cat autorun-spdk.conf 00:00:58.366 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.366 SPDK_TEST_NVMF=1 00:00:58.366 SPDK_TEST_NVME_CLI=1 00:00:58.366 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.366 SPDK_TEST_NVMF_NICS=e810 00:00:58.366 SPDK_TEST_VFIOUSER=1 00:00:58.366 SPDK_RUN_UBSAN=1 00:00:58.366 NET_TYPE=phy 00:00:58.366 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:58.366 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:58.375 RUN_NIGHTLY=1 00:00:58.380 [Pipeline] readFile 00:00:58.406 [Pipeline] withEnv 00:00:58.407 [Pipeline] { 00:00:58.422 [Pipeline] sh 00:00:58.709 + set -ex 00:00:58.710 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:58.710 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:58.710 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:58.710 ++ SPDK_TEST_NVMF=1 00:00:58.710 ++ SPDK_TEST_NVME_CLI=1 00:00:58.710 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:58.710 ++ SPDK_TEST_NVMF_NICS=e810 00:00:58.710 ++ SPDK_TEST_VFIOUSER=1 00:00:58.710 ++ SPDK_RUN_UBSAN=1 00:00:58.710 ++ NET_TYPE=phy 00:00:58.710 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:58.710 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:58.710 ++ RUN_NIGHTLY=1 00:00:58.710 + case $SPDK_TEST_NVMF_NICS in 00:00:58.710 + DRIVERS=ice 00:00:58.710 + [[ tcp == \r\d\m\a ]] 00:00:58.710 + [[ -n ice ]] 00:00:58.710 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:58.710 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:58.710 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:58.710 rmmod: ERROR: Module irdma is not currently loaded 00:00:58.710 rmmod: ERROR: Module i40iw is not currently loaded 00:00:58.710 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:58.710 + true 00:00:58.710 + for D in $DRIVERS 00:00:58.710 + sudo modprobe ice 00:00:58.710 + exit 0 00:00:58.720 [Pipeline] } 00:00:58.741 [Pipeline] // withEnv 00:00:58.747 [Pipeline] } 00:00:58.771 [Pipeline] // stage 00:00:58.782 [Pipeline] catchError 00:00:58.784 [Pipeline] { 00:00:58.800 [Pipeline] timeout 00:00:58.801 Timeout set to expire in 50 min 00:00:58.803 [Pipeline] { 00:00:58.819 [Pipeline] stage 00:00:58.821 [Pipeline] { (Tests) 00:00:58.838 [Pipeline] sh 00:00:59.126 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.126 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.126 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.126 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:59.126 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:59.126 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:59.126 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:59.126 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:59.126 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:59.126 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:59.126 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:59.126 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:59.126 + source /etc/os-release 00:00:59.126 ++ NAME='Fedora Linux' 00:00:59.126 ++ VERSION='38 (Cloud Edition)' 00:00:59.126 ++ ID=fedora 00:00:59.126 ++ VERSION_ID=38 00:00:59.126 ++ VERSION_CODENAME= 00:00:59.126 ++ PLATFORM_ID=platform:f38 00:00:59.126 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:59.126 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:59.126 ++ LOGO=fedora-logo-icon 00:00:59.126 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:59.126 ++ HOME_URL=https://fedoraproject.org/ 00:00:59.126 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:59.126 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:59.126 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:59.126 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:59.126 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:59.126 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:59.126 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:59.126 ++ SUPPORT_END=2024-05-14 00:00:59.126 ++ VARIANT='Cloud Edition' 00:00:59.126 ++ VARIANT_ID=cloud 00:00:59.126 + uname -a 00:00:59.126 Linux spdk-cyp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:59.126 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:02.442 Hugepages 00:01:02.442 node hugesize free / total 00:01:02.442 node0 1048576kB 0 / 0 00:01:02.442 node0 2048kB 0 / 0 00:01:02.443 node1 1048576kB 0 / 0 00:01:02.443 node1 2048kB 0 / 0 00:01:02.443 00:01:02.443 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:02.443 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:02.443 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:02.443 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:02.443 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:02.443 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:02.443 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:02.443 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:02.443 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:02.443 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:02.443 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:02.443 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:02.443 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:02.443 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:02.443 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:02.443 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:02.443 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:02.443 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:02.443 + rm -f /tmp/spdk-ld-path 00:01:02.443 + source autorun-spdk.conf 00:01:02.443 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.443 ++ SPDK_TEST_NVMF=1 00:01:02.443 ++ SPDK_TEST_NVME_CLI=1 00:01:02.443 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:02.443 ++ SPDK_TEST_NVMF_NICS=e810 00:01:02.443 ++ SPDK_TEST_VFIOUSER=1 00:01:02.443 ++ SPDK_RUN_UBSAN=1 00:01:02.443 ++ NET_TYPE=phy 00:01:02.443 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:02.443 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:02.443 ++ RUN_NIGHTLY=1 00:01:02.443 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:02.443 + [[ -n '' ]] 00:01:02.443 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:02.704 + for M in /var/spdk/build-*-manifest.txt 00:01:02.704 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:02.704 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:02.704 + for M in /var/spdk/build-*-manifest.txt 00:01:02.704 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:02.704 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:02.704 ++ uname 00:01:02.704 + [[ Linux == \L\i\n\u\x ]] 00:01:02.704 + sudo dmesg -T 00:01:02.704 + sudo dmesg --clear 00:01:02.704 + dmesg_pid=149387 00:01:02.704 + [[ Fedora Linux == FreeBSD ]] 00:01:02.704 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:02.704 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:02.704 + sudo dmesg -Tw 00:01:02.704 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:02.704 + [[ -x /usr/src/fio-static/fio ]] 00:01:02.704 + export FIO_BIN=/usr/src/fio-static/fio 00:01:02.704 + FIO_BIN=/usr/src/fio-static/fio 00:01:02.704 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:02.704 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:02.704 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:02.704 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:02.704 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:02.704 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:02.704 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:02.704 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:02.704 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:02.704 Test configuration: 00:01:02.704 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.704 SPDK_TEST_NVMF=1 00:01:02.704 SPDK_TEST_NVME_CLI=1 00:01:02.704 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:02.704 SPDK_TEST_NVMF_NICS=e810 00:01:02.704 SPDK_TEST_VFIOUSER=1 00:01:02.704 SPDK_RUN_UBSAN=1 00:01:02.704 NET_TYPE=phy 00:01:02.704 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:02.704 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:02.704 RUN_NIGHTLY=1 14:03:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:02.704 14:03:26 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:02.704 14:03:26 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:02.704 14:03:26 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:02.704 14:03:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.704 14:03:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.704 14:03:26 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.704 14:03:26 -- paths/export.sh@5 -- $ export PATH 00:01:02.704 14:03:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:02.704 14:03:26 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:02.704 14:03:26 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:02.704 14:03:26 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1717761806.XXXXXX 00:01:02.704 14:03:26 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1717761806.verHpG 00:01:02.704 14:03:26 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:02.704 14:03:26 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:01:02.704 14:03:26 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:02.704 14:03:26 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:02.704 14:03:26 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:02.705 14:03:26 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:02.705 14:03:26 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:02.705 14:03:26 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:02.705 14:03:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.705 14:03:26 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:02.705 14:03:26 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:02.705 14:03:26 -- pm/common@17 -- $ local monitor 00:01:02.705 14:03:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.705 14:03:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.705 14:03:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.705 14:03:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:02.705 14:03:26 -- pm/common@21 -- $ date +%s 00:01:02.705 14:03:26 -- pm/common@21 -- $ date +%s 00:01:02.705 14:03:26 -- pm/common@25 -- $ sleep 1 00:01:02.705 14:03:26 -- pm/common@21 -- $ date +%s 00:01:02.705 14:03:26 -- pm/common@21 -- $ date +%s 00:01:02.705 14:03:26 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717761806 00:01:02.705 14:03:26 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717761806 00:01:02.705 14:03:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717761806 00:01:02.705 14:03:26 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717761806 00:01:02.966 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717761806_collect-vmstat.pm.log 00:01:02.966 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717761806_collect-cpu-load.pm.log 00:01:02.966 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717761806_collect-cpu-temp.pm.log 00:01:02.966 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717761806_collect-bmc-pm.bmc.pm.log 00:01:03.908 14:03:27 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:03.908 14:03:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:03.908 14:03:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:03.908 14:03:27 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:03.908 14:03:27 -- spdk/autobuild.sh@16 -- $ date -u 00:01:03.908 Fri Jun 7 12:03:27 PM UTC 2024 00:01:03.908 14:03:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:03.908 v24.09-pre-53-ge55c9a812 00:01:03.908 14:03:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:03.908 14:03:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:03.908 14:03:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:03.908 14:03:27 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:03.908 14:03:27 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:03.908 14:03:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:03.908 ************************************ 00:01:03.908 START TEST ubsan 00:01:03.908 ************************************ 00:01:03.908 14:03:27 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:01:03.908 using ubsan 00:01:03.908 00:01:03.908 real 0m0.000s 00:01:03.908 user 0m0.000s 00:01:03.908 sys 0m0.000s 00:01:03.908 14:03:27 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:01:03.908 14:03:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:03.908 ************************************ 00:01:03.908 END TEST ubsan 00:01:03.908 ************************************ 00:01:03.908 14:03:27 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:03.908 14:03:27 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:03.909 14:03:27 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:03.909 14:03:27 -- common/autotest_common.sh@1100 -- $ '[' 2 -le 1 ']' 00:01:03.909 14:03:27 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:03.909 14:03:27 -- common/autotest_common.sh@10 -- $ set 
+x 00:01:03.909 ************************************ 00:01:03.909 START TEST build_native_dpdk 00:01:03.909 ************************************ 00:01:03.909 14:03:27 build_native_dpdk -- common/autotest_common.sh@1124 -- $ _build_native_dpdk 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:03.909 caf0f5d395 version: 22.11.4 00:01:03.909 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:03.909 dc9c799c7d vhost: fix missing spinlock unlock 00:01:03.909 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:03.909 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:03.909 
14:03:27 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:03.909 14:03:27 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:03.909 patching file config/rte_config.h 00:01:03.909 Hunk #1 succeeded at 60 (offset 1 line). 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:03.909 14:03:27 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:09.196 The Meson build system 00:01:09.196 Version: 1.3.1 00:01:09.196 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:09.196 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:09.196 Build type: native build 00:01:09.196 Program cat found: YES (/usr/bin/cat) 00:01:09.196 Project name: DPDK 00:01:09.196 Project version: 22.11.4 00:01:09.196 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:09.196 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:09.196 Host machine cpu family: x86_64 00:01:09.196 Host machine cpu: x86_64 00:01:09.196 Message: ## Building in Developer Mode ## 00:01:09.196 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:09.196 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:09.196 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:09.196 Program objdump found: YES (/usr/bin/objdump) 00:01:09.196 Program python3 found: YES (/usr/bin/python3) 00:01:09.196 Program cat found: YES (/usr/bin/cat) 00:01:09.196 config/meson.build:83: WARNING: The "machine" option is 
deprecated. Please use "cpu_instruction_set" instead. 00:01:09.196 Checking for size of "void *" : 8 00:01:09.196 Checking for size of "void *" : 8 (cached) 00:01:09.196 Library m found: YES 00:01:09.196 Library numa found: YES 00:01:09.196 Has header "numaif.h" : YES 00:01:09.197 Library fdt found: NO 00:01:09.197 Library execinfo found: NO 00:01:09.197 Has header "execinfo.h" : YES 00:01:09.197 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:09.197 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:09.197 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:09.197 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:09.197 Run-time dependency openssl found: YES 3.0.9 00:01:09.197 Run-time dependency libpcap found: YES 1.10.4 00:01:09.197 Has header "pcap.h" with dependency libpcap: YES 00:01:09.197 Compiler for C supports arguments -Wcast-qual: YES 00:01:09.197 Compiler for C supports arguments -Wdeprecated: YES 00:01:09.197 Compiler for C supports arguments -Wformat: YES 00:01:09.197 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:09.197 Compiler for C supports arguments -Wformat-security: NO 00:01:09.197 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:09.197 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:09.197 Compiler for C supports arguments -Wnested-externs: YES 00:01:09.197 Compiler for C supports arguments -Wold-style-definition: YES 00:01:09.197 Compiler for C supports arguments -Wpointer-arith: YES 00:01:09.197 Compiler for C supports arguments -Wsign-compare: YES 00:01:09.197 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:09.197 Compiler for C supports arguments -Wundef: YES 00:01:09.197 Compiler for C supports arguments -Wwrite-strings: YES 00:01:09.197 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:09.197 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:09.197 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:09.197 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:09.197 Compiler for C supports arguments -mavx512f: YES 00:01:09.197 Checking if "AVX512 checking" compiles: YES 00:01:09.197 Fetching value of define "__SSE4_2__" : 1 00:01:09.197 Fetching value of define "__AES__" : 1 00:01:09.197 Fetching value of define "__AVX__" : 1 00:01:09.197 Fetching value of define "__AVX2__" : 1 00:01:09.197 Fetching value of define "__AVX512BW__" : 1 00:01:09.197 Fetching value of define "__AVX512CD__" : 1 00:01:09.197 Fetching value of define "__AVX512DQ__" : 1 00:01:09.197 Fetching value of define "__AVX512F__" : 1 00:01:09.197 Fetching value of define "__AVX512VL__" : 1 00:01:09.197 Fetching value of define "__PCLMUL__" : 1 00:01:09.197 Fetching value of define "__RDRND__" : 1 00:01:09.197 Fetching value of define "__RDSEED__" : 1 00:01:09.197 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:09.197 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:09.197 Message: lib/kvargs: Defining dependency "kvargs" 00:01:09.197 Message: lib/telemetry: Defining dependency "telemetry" 00:01:09.197 Checking for function "getentropy" : YES 00:01:09.197 Message: lib/eal: Defining dependency "eal" 00:01:09.197 Message: lib/ring: Defining dependency "ring" 00:01:09.197 Message: lib/rcu: Defining dependency "rcu" 00:01:09.197 Message: lib/mempool: Defining dependency "mempool" 00:01:09.197 Message: lib/mbuf: Defining dependency "mbuf" 00:01:09.197 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:01:09.197 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:09.197 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:09.197 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:09.197 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:09.197 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:09.197 Compiler for C supports arguments -mpclmul: YES 00:01:09.197 Compiler for C supports arguments -maes: YES 00:01:09.197 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:09.197 Compiler for C supports arguments -mavx512bw: YES 00:01:09.197 Compiler for C supports arguments -mavx512dq: YES 00:01:09.197 Compiler for C supports arguments -mavx512vl: YES 00:01:09.197 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:09.197 Compiler for C supports arguments -mavx2: YES 00:01:09.197 Compiler for C supports arguments -mavx: YES 00:01:09.197 Message: lib/net: Defining dependency "net" 00:01:09.197 Message: lib/meter: Defining dependency "meter" 00:01:09.197 Message: lib/ethdev: Defining dependency "ethdev" 00:01:09.197 Message: lib/pci: Defining dependency "pci" 00:01:09.197 Message: lib/cmdline: Defining dependency "cmdline" 00:01:09.197 Message: lib/metrics: Defining dependency "metrics" 00:01:09.197 Message: lib/hash: Defining dependency "hash" 00:01:09.197 Message: lib/timer: Defining dependency "timer" 00:01:09.197 Fetching value of define "__AVX2__" : 1 (cached) 00:01:09.197 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:09.197 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:09.197 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:09.197 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:09.197 Message: lib/acl: Defining dependency "acl" 00:01:09.197 Message: lib/bbdev: Defining dependency "bbdev" 00:01:09.197 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:09.197 Run-time dependency libelf found: YES 0.190 00:01:09.197 Message: lib/bpf: Defining dependency "bpf" 00:01:09.197 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:09.197 Message: lib/compressdev: Defining dependency "compressdev" 00:01:09.197 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:09.197 Message: lib/distributor: Defining dependency "distributor" 00:01:09.197 Message: lib/efd: Defining dependency "efd" 00:01:09.197 Message: lib/eventdev: Defining dependency "eventdev" 00:01:09.197 Message: lib/gpudev: Defining dependency "gpudev" 00:01:09.197 Message: lib/gro: Defining dependency "gro" 00:01:09.197 Message: lib/gso: Defining dependency "gso" 00:01:09.197 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:09.197 Message: lib/jobstats: Defining dependency "jobstats" 00:01:09.197 Message: lib/latencystats: Defining dependency "latencystats" 00:01:09.197 Message: lib/lpm: Defining dependency "lpm" 00:01:09.197 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:09.197 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:09.197 Fetching value of define "__AVX512IFMA__" : 1 00:01:09.197 Message: lib/member: Defining dependency "member" 00:01:09.197 Message: lib/pcapng: Defining dependency "pcapng" 00:01:09.197 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:09.197 Message: lib/power: Defining dependency "power" 00:01:09.197 Message: lib/rawdev: Defining dependency "rawdev" 00:01:09.197 Message: lib/regexdev: Defining dependency "regexdev" 00:01:09.197 Message: lib/dmadev: Defining dependency "dmadev" 00:01:09.197 Message: lib/rib: 
Defining dependency "rib" 00:01:09.197 Message: lib/reorder: Defining dependency "reorder" 00:01:09.197 Message: lib/sched: Defining dependency "sched" 00:01:09.197 Message: lib/security: Defining dependency "security" 00:01:09.197 Message: lib/stack: Defining dependency "stack" 00:01:09.197 Has header "linux/userfaultfd.h" : YES 00:01:09.197 Message: lib/vhost: Defining dependency "vhost" 00:01:09.197 Message: lib/ipsec: Defining dependency "ipsec" 00:01:09.197 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:09.197 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:09.197 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:09.197 Message: lib/fib: Defining dependency "fib" 00:01:09.197 Message: lib/port: Defining dependency "port" 00:01:09.197 Message: lib/pdump: Defining dependency "pdump" 00:01:09.197 Message: lib/table: Defining dependency "table" 00:01:09.197 Message: lib/pipeline: Defining dependency "pipeline" 00:01:09.197 Message: lib/graph: Defining dependency "graph" 00:01:09.197 Message: lib/node: Defining dependency "node" 00:01:09.197 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:09.197 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:09.197 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:09.197 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:09.197 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:09.197 Compiler for C supports arguments -Wno-unused-value: YES 00:01:09.197 Compiler for C supports arguments -Wno-format: YES 00:01:09.197 Compiler for C supports arguments -Wno-format-security: YES 00:01:09.197 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:09.197 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:10.136 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:10.136 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:10.136 Fetching value of define "__AVX2__" : 1 (cached) 00:01:10.136 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:10.137 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:10.137 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:10.137 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:10.137 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:10.137 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:10.137 Program doxygen found: YES (/usr/bin/doxygen) 00:01:10.137 Configuring doxy-api.conf using configuration 00:01:10.137 Program sphinx-build found: NO 00:01:10.137 Configuring rte_build_config.h using configuration 00:01:10.137 Message: 00:01:10.137 ================= 00:01:10.137 Applications Enabled 00:01:10.137 ================= 00:01:10.137 00:01:10.137 apps: 00:01:10.137 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:10.137 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:10.137 test-security-perf, 00:01:10.137 00:01:10.137 Message: 00:01:10.137 ================= 00:01:10.137 Libraries Enabled 00:01:10.137 ================= 00:01:10.137 00:01:10.137 libs: 00:01:10.137 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:10.137 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:10.137 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:10.137 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 
00:01:10.137 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:10.137 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:10.137 table, pipeline, graph, node, 00:01:10.137 00:01:10.137 Message: 00:01:10.137 =============== 00:01:10.137 Drivers Enabled 00:01:10.137 =============== 00:01:10.137 00:01:10.137 common: 00:01:10.137 00:01:10.137 bus: 00:01:10.137 pci, vdev, 00:01:10.137 mempool: 00:01:10.137 ring, 00:01:10.137 dma: 00:01:10.137 00:01:10.137 net: 00:01:10.137 i40e, 00:01:10.137 raw: 00:01:10.137 00:01:10.137 crypto: 00:01:10.137 00:01:10.137 compress: 00:01:10.137 00:01:10.137 regex: 00:01:10.137 00:01:10.137 vdpa: 00:01:10.137 00:01:10.137 event: 00:01:10.137 00:01:10.137 baseband: 00:01:10.137 00:01:10.137 gpu: 00:01:10.137 00:01:10.137 00:01:10.137 Message: 00:01:10.137 ================= 00:01:10.137 Content Skipped 00:01:10.137 ================= 00:01:10.137 00:01:10.137 apps: 00:01:10.137 00:01:10.137 libs: 00:01:10.137 kni: explicitly disabled via build config (deprecated lib) 00:01:10.137 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:10.137 00:01:10.137 drivers: 00:01:10.137 common/cpt: not in enabled drivers build config 00:01:10.137 common/dpaax: not in enabled drivers build config 00:01:10.137 common/iavf: not in enabled drivers build config 00:01:10.137 common/idpf: not in enabled drivers build config 00:01:10.137 common/mvep: not in enabled drivers build config 00:01:10.137 common/octeontx: not in enabled drivers build config 00:01:10.137 bus/auxiliary: not in enabled drivers build config 00:01:10.137 bus/dpaa: not in enabled drivers build config 00:01:10.137 bus/fslmc: not in enabled drivers build config 00:01:10.137 bus/ifpga: not in enabled drivers build config 00:01:10.137 bus/vmbus: not in enabled drivers build config 00:01:10.137 common/cnxk: not in enabled drivers build config 00:01:10.137 common/mlx5: not in enabled drivers build config 00:01:10.137 common/qat: not in enabled drivers build config 00:01:10.137 common/sfc_efx: not in enabled drivers build config 00:01:10.137 mempool/bucket: not in enabled drivers build config 00:01:10.137 mempool/cnxk: not in enabled drivers build config 00:01:10.137 mempool/dpaa: not in enabled drivers build config 00:01:10.137 mempool/dpaa2: not in enabled drivers build config 00:01:10.137 mempool/octeontx: not in enabled drivers build config 00:01:10.137 mempool/stack: not in enabled drivers build config 00:01:10.137 dma/cnxk: not in enabled drivers build config 00:01:10.137 dma/dpaa: not in enabled drivers build config 00:01:10.137 dma/dpaa2: not in enabled drivers build config 00:01:10.137 dma/hisilicon: not in enabled drivers build config 00:01:10.137 dma/idxd: not in enabled drivers build config 00:01:10.137 dma/ioat: not in enabled drivers build config 00:01:10.137 dma/skeleton: not in enabled drivers build config 00:01:10.137 net/af_packet: not in enabled drivers build config 00:01:10.137 net/af_xdp: not in enabled drivers build config 00:01:10.137 net/ark: not in enabled drivers build config 00:01:10.137 net/atlantic: not in enabled drivers build config 00:01:10.137 net/avp: not in enabled drivers build config 00:01:10.137 net/axgbe: not in enabled drivers build config 00:01:10.137 net/bnx2x: not in enabled drivers build config 00:01:10.137 net/bnxt: not in enabled drivers build config 00:01:10.137 net/bonding: not in enabled drivers build config 00:01:10.137 net/cnxk: not in enabled drivers build config 00:01:10.137 net/cxgbe: not in enabled drivers build config 
00:01:10.137 net/dpaa: not in enabled drivers build config 00:01:10.137 net/dpaa2: not in enabled drivers build config 00:01:10.137 net/e1000: not in enabled drivers build config 00:01:10.137 net/ena: not in enabled drivers build config 00:01:10.137 net/enetc: not in enabled drivers build config 00:01:10.137 net/enetfec: not in enabled drivers build config 00:01:10.137 net/enic: not in enabled drivers build config 00:01:10.137 net/failsafe: not in enabled drivers build config 00:01:10.137 net/fm10k: not in enabled drivers build config 00:01:10.137 net/gve: not in enabled drivers build config 00:01:10.137 net/hinic: not in enabled drivers build config 00:01:10.137 net/hns3: not in enabled drivers build config 00:01:10.137 net/iavf: not in enabled drivers build config 00:01:10.137 net/ice: not in enabled drivers build config 00:01:10.137 net/idpf: not in enabled drivers build config 00:01:10.137 net/igc: not in enabled drivers build config 00:01:10.137 net/ionic: not in enabled drivers build config 00:01:10.137 net/ipn3ke: not in enabled drivers build config 00:01:10.137 net/ixgbe: not in enabled drivers build config 00:01:10.137 net/kni: not in enabled drivers build config 00:01:10.137 net/liquidio: not in enabled drivers build config 00:01:10.137 net/mana: not in enabled drivers build config 00:01:10.137 net/memif: not in enabled drivers build config 00:01:10.137 net/mlx4: not in enabled drivers build config 00:01:10.137 net/mlx5: not in enabled drivers build config 00:01:10.137 net/mvneta: not in enabled drivers build config 00:01:10.137 net/mvpp2: not in enabled drivers build config 00:01:10.137 net/netvsc: not in enabled drivers build config 00:01:10.137 net/nfb: not in enabled drivers build config 00:01:10.137 net/nfp: not in enabled drivers build config 00:01:10.137 net/ngbe: not in enabled drivers build config 00:01:10.137 net/null: not in enabled drivers build config 00:01:10.137 net/octeontx: not in enabled drivers build config 00:01:10.137 net/octeon_ep: not in enabled drivers build config 00:01:10.137 net/pcap: not in enabled drivers build config 00:01:10.137 net/pfe: not in enabled drivers build config 00:01:10.137 net/qede: not in enabled drivers build config 00:01:10.137 net/ring: not in enabled drivers build config 00:01:10.137 net/sfc: not in enabled drivers build config 00:01:10.137 net/softnic: not in enabled drivers build config 00:01:10.137 net/tap: not in enabled drivers build config 00:01:10.137 net/thunderx: not in enabled drivers build config 00:01:10.137 net/txgbe: not in enabled drivers build config 00:01:10.137 net/vdev_netvsc: not in enabled drivers build config 00:01:10.137 net/vhost: not in enabled drivers build config 00:01:10.137 net/virtio: not in enabled drivers build config 00:01:10.137 net/vmxnet3: not in enabled drivers build config 00:01:10.137 raw/cnxk_bphy: not in enabled drivers build config 00:01:10.137 raw/cnxk_gpio: not in enabled drivers build config 00:01:10.137 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:10.137 raw/ifpga: not in enabled drivers build config 00:01:10.137 raw/ntb: not in enabled drivers build config 00:01:10.137 raw/skeleton: not in enabled drivers build config 00:01:10.137 crypto/armv8: not in enabled drivers build config 00:01:10.137 crypto/bcmfs: not in enabled drivers build config 00:01:10.137 crypto/caam_jr: not in enabled drivers build config 00:01:10.137 crypto/ccp: not in enabled drivers build config 00:01:10.137 crypto/cnxk: not in enabled drivers build config 00:01:10.137 crypto/dpaa_sec: not in enabled 
drivers build config 00:01:10.137 crypto/dpaa2_sec: not in enabled drivers build config 00:01:10.137 crypto/ipsec_mb: not in enabled drivers build config 00:01:10.137 crypto/mlx5: not in enabled drivers build config 00:01:10.137 crypto/mvsam: not in enabled drivers build config 00:01:10.137 crypto/nitrox: not in enabled drivers build config 00:01:10.137 crypto/null: not in enabled drivers build config 00:01:10.137 crypto/octeontx: not in enabled drivers build config 00:01:10.137 crypto/openssl: not in enabled drivers build config 00:01:10.137 crypto/scheduler: not in enabled drivers build config 00:01:10.137 crypto/uadk: not in enabled drivers build config 00:01:10.137 crypto/virtio: not in enabled drivers build config 00:01:10.137 compress/isal: not in enabled drivers build config 00:01:10.137 compress/mlx5: not in enabled drivers build config 00:01:10.137 compress/octeontx: not in enabled drivers build config 00:01:10.137 compress/zlib: not in enabled drivers build config 00:01:10.137 regex/mlx5: not in enabled drivers build config 00:01:10.137 regex/cn9k: not in enabled drivers build config 00:01:10.137 vdpa/ifc: not in enabled drivers build config 00:01:10.137 vdpa/mlx5: not in enabled drivers build config 00:01:10.137 vdpa/sfc: not in enabled drivers build config 00:01:10.137 event/cnxk: not in enabled drivers build config 00:01:10.137 event/dlb2: not in enabled drivers build config 00:01:10.137 event/dpaa: not in enabled drivers build config 00:01:10.137 event/dpaa2: not in enabled drivers build config 00:01:10.137 event/dsw: not in enabled drivers build config 00:01:10.138 event/opdl: not in enabled drivers build config 00:01:10.138 event/skeleton: not in enabled drivers build config 00:01:10.138 event/sw: not in enabled drivers build config 00:01:10.138 event/octeontx: not in enabled drivers build config 00:01:10.138 baseband/acc: not in enabled drivers build config 00:01:10.138 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:10.138 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:10.138 baseband/la12xx: not in enabled drivers build config 00:01:10.138 baseband/null: not in enabled drivers build config 00:01:10.138 baseband/turbo_sw: not in enabled drivers build config 00:01:10.138 gpu/cuda: not in enabled drivers build config 00:01:10.138 00:01:10.138 00:01:10.138 Build targets in project: 309 00:01:10.138 00:01:10.138 DPDK 22.11.4 00:01:10.138 00:01:10.138 User defined options 00:01:10.138 libdir : lib 00:01:10.138 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:10.138 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:10.138 c_link_args : 00:01:10.138 enable_docs : false 00:01:10.138 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:10.138 enable_kmods : false 00:01:10.138 machine : native 00:01:10.138 tests : false 00:01:10.138 00:01:10.138 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:10.138 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:10.138 14:03:33 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 00:01:10.138 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:10.396 [1/738] Generating lib/rte_kvargs_def with a custom command 00:01:10.396 [2/738] Generating lib/rte_kvargs_mingw with a custom command 00:01:10.396 [3/738] Generating lib/rte_telemetry_def with a custom command 00:01:10.396 [4/738] Generating lib/rte_telemetry_mingw with a custom command 00:01:10.396 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:10.396 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:10.396 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:10.396 [8/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:10.396 [9/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:10.396 [10/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:10.396 [11/738] Generating lib/rte_ring_def with a custom command 00:01:10.396 [12/738] Generating lib/rte_ring_mingw with a custom command 00:01:10.396 [13/738] Generating lib/rte_rcu_mingw with a custom command 00:01:10.396 [14/738] Generating lib/rte_mbuf_def with a custom command 00:01:10.396 [15/738] Generating lib/rte_net_mingw with a custom command 00:01:10.396 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:10.396 [17/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:10.396 [18/738] Generating lib/rte_net_def with a custom command 00:01:10.396 [19/738] Generating lib/rte_meter_mingw with a custom command 00:01:10.396 [20/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:10.396 [21/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:10.396 [22/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:10.396 [23/738] Generating lib/rte_rcu_def with a custom command 00:01:10.396 [24/738] Generating lib/rte_pci_def with a custom command 00:01:10.396 [25/738] Generating lib/rte_eal_def with a custom command 00:01:10.396 [26/738] Generating lib/rte_pci_mingw with a custom command 00:01:10.396 [27/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:10.396 [28/738] Generating lib/rte_cmdline_def with a custom command 00:01:10.396 [29/738] Generating lib/rte_eal_mingw with a custom command 00:01:10.396 [30/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:10.396 [31/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:10.657 [32/738] Generating lib/rte_metrics_mingw with a custom command 00:01:10.657 [33/738] Generating lib/rte_hash_mingw with a custom command 00:01:10.657 [34/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:10.657 [35/738] Generating lib/rte_mempool_def with a custom command 00:01:10.657 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:10.657 [37/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:10.657 [38/738] Generating lib/rte_acl_def with a custom command 00:01:10.657 [39/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:10.657 [40/738] Generating lib/rte_mbuf_mingw with a custom command 00:01:10.657 [41/738] Generating lib/rte_acl_mingw with a 
custom command 00:01:10.657 [42/738] Generating lib/rte_ethdev_mingw with a custom command 00:01:10.657 [43/738] Generating lib/rte_hash_def with a custom command 00:01:10.657 [44/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:10.657 [45/738] Generating lib/rte_timer_mingw with a custom command 00:01:10.657 [46/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:10.657 [47/738] Generating lib/rte_bitratestats_def with a custom command 00:01:10.657 [48/738] Generating lib/rte_mempool_mingw with a custom command 00:01:10.657 [49/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:10.657 [50/738] Generating lib/rte_meter_def with a custom command 00:01:10.657 [51/738] Generating lib/rte_metrics_def with a custom command 00:01:10.657 [52/738] Generating lib/rte_bpf_mingw with a custom command 00:01:10.657 [53/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:10.657 [54/738] Generating lib/rte_bbdev_def with a custom command 00:01:10.657 [55/738] Generating lib/rte_bpf_def with a custom command 00:01:10.657 [56/738] Generating lib/rte_cfgfile_def with a custom command 00:01:10.657 [57/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:10.657 [58/738] Generating lib/rte_cmdline_mingw with a custom command 00:01:10.657 [59/738] Generating lib/rte_compressdev_def with a custom command 00:01:10.657 [60/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:10.657 [61/738] Generating lib/rte_compressdev_mingw with a custom command 00:01:10.657 [62/738] Generating lib/rte_ethdev_def with a custom command 00:01:10.657 [63/738] Generating lib/rte_cryptodev_def with a custom command 00:01:10.657 [64/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:10.657 [65/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:10.657 [66/738] Generating lib/rte_timer_def with a custom command 00:01:10.657 [67/738] Generating lib/rte_bbdev_mingw with a custom command 00:01:10.657 [68/738] Generating lib/rte_cryptodev_mingw with a custom command 00:01:10.657 [69/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:10.657 [70/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:10.657 [71/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:10.657 [72/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:10.657 [73/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:10.657 [74/738] Generating lib/rte_distributor_mingw with a custom command 00:01:10.657 [75/738] Linking static target lib/librte_pci.a 00:01:10.657 [76/738] Generating lib/rte_efd_def with a custom command 00:01:10.657 [77/738] Generating lib/rte_distributor_def with a custom command 00:01:10.657 [78/738] Generating lib/rte_cfgfile_mingw with a custom command 00:01:10.657 [79/738] Generating lib/rte_bitratestats_mingw with a custom command 00:01:10.657 [80/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:10.657 [81/738] Linking static target lib/librte_ring.a 00:01:10.657 [82/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:10.657 [83/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:10.657 [84/738] Generating lib/rte_eventdev_def with a custom command 00:01:10.657 [85/738] Generating lib/rte_eventdev_mingw with a custom command 00:01:10.657 [86/738] Generating 
lib/rte_gpudev_def with a custom command 00:01:10.657 [87/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:10.657 [88/738] Generating lib/rte_gpudev_mingw with a custom command 00:01:10.657 [89/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:10.657 [90/738] Linking static target lib/librte_kvargs.a 00:01:10.657 [91/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:10.657 [92/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:10.657 [93/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:10.657 [94/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:10.657 [95/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:10.657 [96/738] Generating lib/rte_gro_def with a custom command 00:01:10.657 [97/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:10.657 [98/738] Generating lib/rte_gro_mingw with a custom command 00:01:10.657 [99/738] Linking static target lib/librte_meter.a 00:01:10.657 [100/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:10.657 [101/738] Generating lib/rte_efd_mingw with a custom command 00:01:10.657 [102/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:10.657 [103/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:10.657 [104/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:10.657 [105/738] Generating lib/rte_gso_def with a custom command 00:01:10.657 [106/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:10.921 [107/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:10.921 [108/738] Generating lib/rte_gso_mingw with a custom command 00:01:10.921 [109/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:10.921 [110/738] Generating lib/rte_ip_frag_mingw with a custom command 00:01:10.921 [111/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:10.921 [112/738] Generating lib/rte_jobstats_mingw with a custom command 00:01:10.921 [113/738] Generating lib/rte_latencystats_def with a custom command 00:01:10.921 [114/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:10.921 [115/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:10.921 [116/738] Generating lib/rte_jobstats_def with a custom command 00:01:10.921 [117/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:10.921 [118/738] Generating lib/rte_latencystats_mingw with a custom command 00:01:10.921 [119/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:10.921 [120/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:10.921 [121/738] Generating lib/rte_ip_frag_def with a custom command 00:01:10.921 [122/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:10.921 [123/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:10.921 [124/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:10.921 [125/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:10.921 [126/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:10.921 [127/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:10.921 [128/738] Compiling C 
object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:10.921 [129/738] Generating lib/rte_lpm_def with a custom command 00:01:10.921 [130/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:10.921 [131/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:10.921 [132/738] Generating lib/rte_member_def with a custom command 00:01:10.921 [133/738] Linking static target lib/librte_cfgfile.a 00:01:10.921 [134/738] Generating lib/rte_lpm_mingw with a custom command 00:01:10.921 [135/738] Generating lib/rte_member_mingw with a custom command 00:01:10.921 [136/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:10.921 [137/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:10.921 [138/738] Generating lib/rte_pcapng_mingw with a custom command 00:01:10.921 [139/738] Generating lib/rte_pcapng_def with a custom command 00:01:10.921 [140/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:10.921 [141/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:10.921 [142/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:10.921 [143/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:10.921 [144/738] Generating lib/rte_power_def with a custom command 00:01:10.921 [145/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:10.921 [146/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:10.921 [147/738] Generating lib/rte_rawdev_mingw with a custom command 00:01:10.921 [148/738] Generating lib/rte_power_mingw with a custom command 00:01:10.921 [149/738] Generating lib/rte_rawdev_def with a custom command 00:01:11.184 [150/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:11.184 [151/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:11.184 [152/738] Generating lib/rte_dmadev_mingw with a custom command 00:01:11.184 [153/738] Generating lib/rte_regexdev_def with a custom command 00:01:11.184 [154/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:11.184 [155/738] Generating lib/rte_regexdev_mingw with a custom command 00:01:11.184 [156/738] Generating lib/rte_dmadev_def with a custom command 00:01:11.184 [157/738] Generating lib/rte_rib_def with a custom command 00:01:11.184 [158/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:11.184 [159/738] Generating lib/rte_rib_mingw with a custom command 00:01:11.184 [160/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:11.184 [161/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:11.184 [162/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:11.184 [163/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:11.184 [164/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:11.184 [165/738] Generating lib/rte_sched_def with a custom command 00:01:11.184 [166/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.184 [167/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:11.184 [168/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:11.184 [169/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:11.184 [170/738] Generating lib/rte_reorder_def with 
a custom command 00:01:11.184 [171/738] Generating lib/rte_reorder_mingw with a custom command 00:01:11.184 [172/738] Generating lib/rte_sched_mingw with a custom command 00:01:11.184 [173/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:11.184 [174/738] Generating lib/rte_security_def with a custom command 00:01:11.184 [175/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:11.184 [176/738] Generating lib/rte_security_mingw with a custom command 00:01:11.184 [177/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.184 [178/738] Linking static target lib/librte_jobstats.a 00:01:11.184 [179/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:11.184 [180/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:11.184 [181/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.184 [182/738] Generating lib/rte_stack_mingw with a custom command 00:01:11.184 [183/738] Generating lib/rte_stack_def with a custom command 00:01:11.184 [184/738] Linking static target lib/librte_telemetry.a 00:01:11.184 [185/738] Generating lib/rte_vhost_def with a custom command 00:01:11.184 [186/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:11.184 [187/738] Linking target lib/librte_kvargs.so.23.0 00:01:11.184 [188/738] Generating lib/rte_vhost_mingw with a custom command 00:01:11.184 [189/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:11.184 [190/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:11.184 [191/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:11.184 [192/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:11.184 [193/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:11.184 [194/738] Linking static target lib/librte_timer.a 00:01:11.184 [195/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:11.184 [196/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.184 [197/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:11.184 [198/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:11.184 [199/738] Linking static target lib/librte_metrics.a 00:01:11.184 [200/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:11.184 [201/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:11.184 [202/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:11.184 [203/738] Generating lib/rte_ipsec_mingw with a custom command 00:01:11.184 [204/738] Generating lib/rte_ipsec_def with a custom command 00:01:11.184 [205/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:11.184 [206/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:11.184 [207/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:11.184 [208/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:11.184 [209/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:11.184 [210/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:11.184 [211/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:11.184 [212/738] Compiling C object 
lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:11.184 [213/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:11.184 [214/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:11.184 [215/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:11.184 [216/738] Generating lib/rte_fib_mingw with a custom command 00:01:11.184 [217/738] Generating lib/rte_fib_def with a custom command 00:01:11.184 [218/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:11.184 [219/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:11.184 [220/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:11.442 [221/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:11.442 [222/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:11.442 [223/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:11.442 [224/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:11.442 [225/738] Linking static target lib/librte_stack.a 00:01:11.442 [226/738] Generating lib/rte_port_def with a custom command 00:01:11.442 [227/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:11.442 [228/738] Generating lib/rte_port_mingw with a custom command 00:01:11.442 [229/738] Generating lib/rte_pdump_mingw with a custom command 00:01:11.442 [230/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:11.442 [231/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:11.442 [232/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:11.442 [233/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:11.442 [234/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:11.442 [235/738] Generating lib/rte_pdump_def with a custom command 00:01:11.442 [236/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:11.442 [237/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:11.442 [238/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:11.442 [239/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:11.442 [240/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:11.442 [241/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:11.442 [242/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:11.442 [243/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:11.442 [244/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:11.442 [245/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:11.442 [246/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:11.442 [247/738] Generating lib/rte_table_mingw with a custom command 00:01:11.442 [248/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:11.442 [249/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:11.442 [250/738] Generating lib/rte_table_def with a custom command 00:01:11.442 [251/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:11.442 [252/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:11.442 [253/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:11.442 [254/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 
00:01:11.442 [255/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:11.442 [256/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:11.442 [257/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:11.442 [258/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.442 [259/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:11.442 [260/738] Generating lib/rte_pipeline_def with a custom command 00:01:11.442 [261/738] Generating lib/rte_pipeline_mingw with a custom command 00:01:11.442 [262/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:11.442 [263/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:11.442 [264/738] Linking static target lib/librte_net.a 00:01:11.442 [265/738] Generating lib/rte_graph_def with a custom command 00:01:11.442 [266/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:11.442 [267/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:11.442 [268/738] Linking static target lib/librte_latencystats.a 00:01:11.442 [269/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:11.442 [270/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:11.442 [271/738] Linking static target lib/librte_bbdev.a 00:01:11.442 [272/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:11.703 [273/738] Generating lib/rte_graph_mingw with a custom command 00:01:11.703 [274/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:11.703 [275/738] Linking static target lib/librte_bitratestats.a 00:01:11.703 [276/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:11.703 [277/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:11.703 [278/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:11.703 [279/738] Linking static target lib/librte_rawdev.a 00:01:11.703 [280/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:11.703 [281/738] Linking static target lib/librte_cmdline.a 00:01:11.703 [282/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:11.703 [283/738] Generating lib/rte_node_def with a custom command 00:01:11.703 [284/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:11.703 [285/738] Generating lib/rte_node_mingw with a custom command 00:01:11.703 [286/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:11.703 [287/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:11.703 [288/738] Generating drivers/rte_bus_pci_def with a custom command 00:01:11.703 [289/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:11.703 [290/738] Linking static target lib/librte_dmadev.a 00:01:11.703 [291/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:11.703 [292/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.703 [293/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:11.703 [294/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.703 [295/738] Linking static target lib/librte_gro.a 00:01:11.703 [296/738] Linking static target lib/librte_regexdev.a 00:01:11.703 [297/738] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:11.703 [298/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:11.703 [299/738] Generating drivers/rte_bus_vdev_def with a custom command 00:01:11.703 [300/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:11.703 [301/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:11.703 [302/738] Linking static target lib/librte_gpudev.a 00:01:11.703 [303/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:11.703 [304/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:11.703 [305/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:11.703 [306/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:11.703 [307/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.703 [308/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:11.703 [309/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.703 [310/738] Generating drivers/rte_mempool_ring_def with a custom command 00:01:11.703 [311/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:11.703 [312/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:11.703 [313/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:11.703 [314/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:11.703 [315/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:11.703 [316/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:11.703 [317/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:11.962 [318/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.962 [319/738] Linking target lib/librte_telemetry.so.23.0 00:01:11.962 [320/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:11.962 [321/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:11.962 [322/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:11.962 [323/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:11.962 [324/738] Linking static target lib/librte_compressdev.a 00:01:11.962 [325/738] Linking static target lib/librte_reorder.a 00:01:11.962 [326/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.962 [327/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:11.962 [328/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:11.962 [329/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.962 [330/738] Linking static target lib/librte_distributor.a 00:01:11.962 [331/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:11.962 [332/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:11.962 [333/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.962 [334/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:11.962 [335/738] Generating drivers/rte_net_i40e_def with a custom command 00:01:11.962 [336/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:11.962 
[337/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:01:11.962 [338/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:11.962 [339/738] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:11.962 [340/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:11.962 [341/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:11.962 [342/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:11.962 [343/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:11.962 [344/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:11.962 [345/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:11.962 [346/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:11.962 [347/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:11.962 [348/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:11.962 [349/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.962 [350/738] Linking static target lib/librte_power.a 00:01:11.962 [351/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:11.962 [352/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:11.962 [353/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:11.962 [354/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:11.962 [355/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:11.962 [356/738] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:11.962 [357/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:11.962 [358/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:11.962 [359/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:11.962 [360/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:11.962 [361/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:11.962 [362/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:12.222 [363/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:12.222 [364/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:12.222 [365/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:12.222 [366/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:12.222 [367/738] Linking static target lib/librte_ip_frag.a 00:01:12.222 [368/738] Linking static target lib/librte_rcu.a 00:01:12.222 [369/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:12.222 [370/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:12.222 [371/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:12.222 [372/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:12.222 [373/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:12.222 [374/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:12.222 [375/738] Linking static target lib/librte_security.a 00:01:12.222 [376/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:12.222 [377/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 
00:01:12.222 [378/738] Linking static target lib/librte_mempool.a 00:01:12.222 [379/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:12.222 [380/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:12.222 [381/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:12.222 [382/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:12.222 [383/738] Linking static target lib/librte_gso.a 00:01:12.222 [384/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:12.222 [385/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:12.222 [386/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:12.222 [387/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:12.222 [388/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:12.222 [389/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:12.222 [390/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:12.222 [391/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.222 [392/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:12.222 [393/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:12.222 [394/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:12.222 [395/738] Linking static target lib/librte_lpm.a 00:01:12.222 [396/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:12.222 [397/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:12.222 [398/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.222 [399/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.222 [400/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:12.222 [401/738] Linking static target lib/librte_pcapng.a 00:01:12.222 [402/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:12.222 [403/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:12.222 [404/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:12.222 [405/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:12.222 [406/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:12.484 [407/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:12.484 [408/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:12.484 [409/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:12.484 [410/738] Linking static target lib/librte_graph.a 00:01:12.484 [411/738] Linking static target drivers/librte_bus_vdev.a 00:01:12.484 [412/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:12.484 [413/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:12.484 [414/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:12.484 [415/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:12.484 [416/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.484 [417/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:12.484 [418/738] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:12.484 [419/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:12.484 [420/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:12.484 [421/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.484 [422/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:12.484 [423/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:12.484 [424/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:12.484 [425/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:12.484 [426/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:12.484 [427/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:12.484 [428/738] Linking static target lib/librte_rib.a 00:01:12.484 [429/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:12.484 [430/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:12.484 [431/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.484 [432/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:12.484 [433/738] Linking static target lib/librte_bpf.a 00:01:12.484 [434/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:12.484 [435/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.484 [436/738] Linking static target lib/librte_eal.a 00:01:12.484 [437/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:12.484 [438/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.484 [439/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:12.484 [440/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:12.484 [441/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:12.484 [442/738] Linking static target lib/librte_efd.a 00:01:12.745 [443/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:12.745 [444/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:12.745 [445/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:12.745 [446/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:12.745 [447/738] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:12.745 [448/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:12.745 [449/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:12.745 [450/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:12.745 [451/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:12.745 [452/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:12.745 [453/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:12.745 [454/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:12.745 [455/738] Linking static target drivers/librte_bus_pci.a 00:01:12.745 [456/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:12.745 [457/738] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:12.745 [458/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:12.745 [459/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.745 [460/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.745 [461/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:12.745 [462/738] Linking static target lib/librte_fib.a 00:01:12.745 [463/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.745 [464/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.745 [465/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:12.745 [466/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.745 [467/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:12.745 [468/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:12.745 [469/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:12.745 [470/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:12.745 [471/738] Linking static target lib/librte_mbuf.a 00:01:12.745 [472/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:12.745 [473/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:12.745 [474/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:13.005 [475/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:13.005 [476/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:13.005 [477/738] Linking static target lib/librte_pdump.a 00:01:13.005 [478/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:13.005 [479/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.005 [480/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:13.005 [481/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:13.005 [482/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:13.005 [483/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.005 [484/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:13.005 [485/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:13.005 [486/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:13.005 [487/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.005 [488/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:13.005 [489/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:13.005 [490/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:13.005 [491/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:13.005 [492/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:13.005 [493/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.005 [494/738] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:13.005 [495/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:13.005 [496/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:13.005 [497/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:13.005 [498/738] Linking static target lib/librte_table.a 00:01:13.005 [499/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:13.005 [500/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:13.005 [501/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:13.005 [502/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.005 [503/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:13.005 [504/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:13.005 [505/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:13.005 [506/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:13.005 [507/738] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:13.005 [508/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:13.005 [509/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:13.005 [510/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:13.005 [511/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:13.005 [512/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.005 [513/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:13.006 [514/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:13.006 [515/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:13.006 [516/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:13.006 [517/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:13.006 [518/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:13.006 [519/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:13.006 [520/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:13.006 [521/738] Linking static target lib/librte_node.a 00:01:13.267 [522/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.267 [523/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:13.267 [524/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.268 [525/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:13.268 [526/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.268 [527/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:13.268 [528/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:13.268 [529/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:13.268 [530/738] Generating lib/mempool.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:13.268 [531/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:13.268 [532/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:13.268 [533/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:13.268 [534/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:13.268 [535/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:13.268 [536/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:13.268 [537/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:13.268 [538/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:13.268 [539/738] Linking static target drivers/librte_mempool_ring.a 00:01:13.268 [540/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:13.268 [541/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:13.268 [542/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:13.268 [543/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:13.268 [544/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.268 [545/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:13.268 [546/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:13.268 [547/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:13.268 [548/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:13.268 [549/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:13.268 [550/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:13.268 [551/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:13.268 [552/738] Linking static target lib/librte_sched.a 00:01:13.268 [553/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:13.268 [554/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:13.268 [555/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:13.268 [556/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:13.268 [557/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:13.268 [558/738] Linking static target lib/librte_ipsec.a 00:01:13.268 [559/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:13.268 [560/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:13.268 [561/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.528 [562/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:13.528 [563/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:13.528 [564/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:13.528 [565/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:13.528 [566/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:13.528 [567/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:13.528 [568/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:13.528 [569/738] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:13.528 [570/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:13.528 [571/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:13.528 [572/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:13.528 [573/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:13.528 [574/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.528 [575/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.528 [576/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:13.528 [577/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:13.528 [578/738] Linking static target lib/librte_member.a 00:01:13.528 [579/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:13.528 [580/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:13.528 [581/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:13.528 [582/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:13.528 [583/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:01:13.528 [584/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:13.528 [585/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:13.528 [586/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:13.528 [587/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:13.788 [588/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:13.788 [589/738] Linking static target lib/librte_port.a 00:01:13.788 [590/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:13.788 [591/738] Linking static target lib/librte_cryptodev.a 00:01:13.788 [592/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:13.788 [593/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.788 [594/738] Linking static target lib/librte_hash.a 00:01:14.048 [595/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:01:14.048 [596/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:14.048 [597/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.048 [598/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:14.048 [599/738] Linking static target lib/librte_eventdev.a 00:01:14.048 [600/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:14.048 [601/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:14.048 [602/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:14.048 [603/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:14.048 [604/738] Linking static target lib/librte_ethdev.a 00:01:14.048 [605/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:14.048 [606/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.048 [607/738] Linking static target drivers/net/i40e/base/libi40e_base.a 
00:01:14.048 [608/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:14.049 [609/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.049 [610/738] Linking static target lib/librte_acl.a 00:01:14.049 [611/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:14.309 [612/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:14.309 [613/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:14.568 [614/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.568 [615/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.828 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:14.828 [617/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:14.828 [618/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.088 [619/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:15.347 [620/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:15.347 [621/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:15.607 [622/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:15.607 [623/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:15.607 [624/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:15.866 [625/738] Linking static target drivers/librte_net_i40e.a 00:01:15.866 [626/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:16.435 [627/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:16.435 [628/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:16.695 [629/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.265 [630/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.265 [631/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.467 [632/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:21.467 [633/738] Linking static target lib/librte_pipeline.a 00:01:21.467 [634/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:21.467 [635/738] Linking static target lib/librte_vhost.a 00:01:21.766 [636/738] Linking target app/dpdk-dumpcap 00:01:21.766 [637/738] Linking target app/dpdk-test-regex 00:01:21.766 [638/738] Linking target app/dpdk-test-eventdev 00:01:21.766 [639/738] Linking target app/dpdk-test-compress-perf 00:01:21.766 [640/738] Linking target app/dpdk-test-acl 00:01:21.766 [641/738] Linking target app/dpdk-pdump 00:01:21.766 [642/738] Linking target app/dpdk-test-fib 00:01:21.766 [643/738] Linking target app/dpdk-test-security-perf 00:01:21.766 [644/738] Linking target app/dpdk-test-pipeline 00:01:21.766 [645/738] Linking target app/dpdk-test-crypto-perf 00:01:21.766 [646/738] Linking target app/dpdk-test-bbdev 00:01:21.766 [647/738] Linking target app/dpdk-test-sad 00:01:21.766 [648/738] Linking target app/dpdk-proc-info 00:01:21.766 [649/738] Linking target app/dpdk-test-cmdline 00:01:21.766 [650/738] Linking target app/dpdk-test-gpudev 00:01:21.766 [651/738] Linking target app/dpdk-test-flow-perf 00:01:21.766 
[652/738] Linking target app/dpdk-testpmd 00:01:22.028 [653/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.415 [654/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.331 [655/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.331 [656/738] Linking target lib/librte_eal.so.23.0 00:01:25.331 [657/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:25.331 [658/738] Linking target lib/librte_meter.so.23.0 00:01:25.331 [659/738] Linking target lib/librte_rawdev.so.23.0 00:01:25.331 [660/738] Linking target lib/librte_pci.so.23.0 00:01:25.331 [661/738] Linking target lib/librte_ring.so.23.0 00:01:25.331 [662/738] Linking target lib/librte_graph.so.23.0 00:01:25.331 [663/738] Linking target lib/librte_timer.so.23.0 00:01:25.331 [664/738] Linking target lib/librte_acl.so.23.0 00:01:25.331 [665/738] Linking target lib/librte_cfgfile.so.23.0 00:01:25.331 [666/738] Linking target lib/librte_jobstats.so.23.0 00:01:25.331 [667/738] Linking target lib/librte_stack.so.23.0 00:01:25.331 [668/738] Linking target lib/librte_dmadev.so.23.0 00:01:25.331 [669/738] Linking target drivers/librte_bus_vdev.so.23.0 00:01:25.331 [670/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:25.331 [671/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:25.331 [672/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:25.331 [673/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:25.331 [674/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:25.331 [675/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:25.331 [676/738] Linking target lib/librte_rcu.so.23.0 00:01:25.331 [677/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:25.331 [678/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:25.331 [679/738] Linking target lib/librte_mempool.so.23.0 00:01:25.331 [680/738] Linking target drivers/librte_bus_pci.so.23.0 00:01:25.592 [681/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:25.592 [682/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:25.592 [683/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:25.592 [684/738] Linking target lib/librte_mbuf.so.23.0 00:01:25.592 [685/738] Linking target lib/librte_rib.so.23.0 00:01:25.592 [686/738] Linking target drivers/librte_mempool_ring.so.23.0 00:01:25.854 [687/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:25.854 [688/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:25.854 [689/738] Linking target lib/librte_fib.so.23.0 00:01:25.854 [690/738] Linking target lib/librte_bbdev.so.23.0 00:01:25.854 [691/738] Linking target lib/librte_regexdev.so.23.0 00:01:25.854 [692/738] Linking target lib/librte_net.so.23.0 00:01:25.854 [693/738] Linking target lib/librte_compressdev.so.23.0 00:01:25.854 [694/738] Linking target lib/librte_distributor.so.23.0 00:01:25.854 [695/738] Linking target lib/librte_gpudev.so.23.0 00:01:25.854 [696/738] Linking target lib/librte_reorder.so.23.0 00:01:25.854 
[697/738] Linking target lib/librte_sched.so.23.0 00:01:25.854 [698/738] Linking target lib/librte_cryptodev.so.23.0 00:01:25.854 [699/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:25.854 [700/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:25.854 [701/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.854 [702/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:26.115 [703/738] Linking target lib/librte_hash.so.23.0 00:01:26.115 [704/738] Linking target lib/librte_cmdline.so.23.0 00:01:26.115 [705/738] Linking target lib/librte_security.so.23.0 00:01:26.115 [706/738] Linking target lib/librte_ethdev.so.23.0 00:01:26.115 [707/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:26.115 [708/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:26.115 [709/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:26.115 [710/738] Linking target lib/librte_efd.so.23.0 00:01:26.115 [711/738] Linking target lib/librte_lpm.so.23.0 00:01:26.115 [712/738] Linking target lib/librte_member.so.23.0 00:01:26.115 [713/738] Linking target lib/librte_metrics.so.23.0 00:01:26.115 [714/738] Linking target lib/librte_ipsec.so.23.0 00:01:26.115 [715/738] Linking target lib/librte_pcapng.so.23.0 00:01:26.375 [716/738] Linking target lib/librte_gso.so.23.0 00:01:26.375 [717/738] Linking target lib/librte_gro.so.23.0 00:01:26.375 [718/738] Linking target lib/librte_ip_frag.so.23.0 00:01:26.375 [719/738] Linking target lib/librte_power.so.23.0 00:01:26.375 [720/738] Linking target lib/librte_bpf.so.23.0 00:01:26.375 [721/738] Linking target lib/librte_eventdev.so.23.0 00:01:26.375 [722/738] Linking target lib/librte_vhost.so.23.0 00:01:26.375 [723/738] Linking target drivers/librte_net_i40e.so.23.0 00:01:26.375 [724/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:26.375 [725/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:26.375 [726/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:26.375 [727/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:26.375 [728/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:26.375 [729/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:26.375 [730/738] Linking target lib/librte_node.so.23.0 00:01:26.375 [731/738] Linking target lib/librte_bitratestats.so.23.0 00:01:26.375 [732/738] Linking target lib/librte_latencystats.so.23.0 00:01:26.375 [733/738] Linking target lib/librte_pdump.so.23.0 00:01:26.375 [734/738] Linking target lib/librte_port.so.23.0 00:01:26.637 [735/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:26.637 [736/738] Linking target lib/librte_table.so.23.0 00:01:26.898 [737/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:26.898 [738/738] Linking target lib/librte_pipeline.so.23.0 00:01:26.898 14:03:50 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:01:26.898 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:26.898 
[0/1] Installing files. 00:01:27.163 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:27.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.163 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:27.164 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:27.164 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.165 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:27.165 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.165 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.166 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:27.166 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:27.166 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.166 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:27.167 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.167 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.168 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:27.169 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:27.169 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:27.169 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_ring.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.169 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_cfgfile.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_rawdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.170 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.435 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.435 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.435 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.435 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.435 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.435 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.435 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.435 Installing lib/librte_node.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.435 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.435 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:27.435 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.435 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:27.435 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.435 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:27.435 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.435 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:27.435 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.435 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.436 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.437 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.438 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
00:01:27.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:27.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:27.439 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:27.439 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:01:27.439 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:27.439 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:01:27.439 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:27.439 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:01:27.439 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:27.439 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:01:27.439 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:27.439 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:01:27.439 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:27.439 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:01:27.439 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:27.439 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:01:27.439 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:27.439 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:01:27.439 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:27.439 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:01:27.439 Installing symlink pointing to librte_meter.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:27.439 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:01:27.439 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:27.439 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:01:27.439 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:27.439 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:01:27.439 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:27.439 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:01:27.439 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:27.439 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:01:27.439 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:27.439 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:01:27.439 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:27.439 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:01:27.439 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:27.439 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:01:27.439 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:27.439 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:01:27.439 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:27.439 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:01:27.439 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:27.439 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:01:27.439 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:27.439 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:01:27.439 Installing symlink pointing to librte_compressdev.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:27.439 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:01:27.439 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:27.439 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:01:27.439 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:27.439 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:01:27.439 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:27.439 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:01:27.439 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:27.439 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:01:27.439 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:27.439 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:01:27.439 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:27.439 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:01:27.439 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:27.439 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:01:27.439 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:27.439 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:01:27.439 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:27.439 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:01:27.439 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:27.439 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:01:27.439 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:27.439 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:01:27.439 Installing symlink pointing to librte_member.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:27.439 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:01:27.440 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:27.440 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:01:27.440 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:27.440 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:01:27.440 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:27.440 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:01:27.440 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:27.440 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:01:27.440 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:27.440 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:01:27.440 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:27.440 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:01:27.440 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:27.440 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:01:27.440 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:27.440 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:01:27.440 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:27.440 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:01:27.440 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:27.440 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:01:27.440 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:27.440 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:01:27.440 Installing symlink pointing to librte_ipsec.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:27.440 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:01:27.440 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:27.440 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:01:27.440 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:27.440 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:01:27.440 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:27.440 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:01:27.440 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:27.440 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:01:27.440 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:27.440 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:01:27.440 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:27.440 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:01:27.440 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:27.440 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:01:27.440 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:01:27.440 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:01:27.440 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:01:27.440 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:01:27.440 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:01:27.440 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:01:27.440 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:01:27.440 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:01:27.440 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:01:27.440 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:01:27.440 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:01:27.440 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:01:27.440 './librte_net_i40e.so' -> 
'dpdk/pmds-23.0/librte_net_i40e.so' 00:01:27.440 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:01:27.440 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:01:27.440 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:01:27.440 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:01:27.440 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:01:27.440 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:01:27.440 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:01:27.440 14:03:51 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:01:27.440 14:03:51 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:27.440 14:03:51 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:01:27.440 14:03:51 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:27.440 00:01:27.440 real 0m23.593s 00:01:27.440 user 5m52.948s 00:01:27.440 sys 2m34.847s 00:01:27.440 14:03:51 build_native_dpdk -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:01:27.440 14:03:51 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:01:27.440 ************************************ 00:01:27.440 END TEST build_native_dpdk 00:01:27.440 ************************************ 00:01:27.701 14:03:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:27.701 14:03:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:27.701 14:03:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:27.701 14:03:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:27.701 14:03:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:27.701 14:03:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:27.701 14:03:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:27.701 14:03:51 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:01:27.701 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:27.960 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:27.960 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:27.960 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:28.221 Using 'verbs' RDMA provider 00:01:44.075 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:56.346 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:56.346 Creating mk/config.mk...done. 00:01:56.346 Creating mk/cc.flags.mk...done. 00:01:56.346 Type 'make' to build. 
00:01:56.346 14:04:19 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:56.346 14:04:19 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:01:56.346 14:04:19 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:01:56.346 14:04:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.346 ************************************ 00:01:56.346 START TEST make 00:01:56.346 ************************************ 00:01:56.346 14:04:19 make -- common/autotest_common.sh@1124 -- $ make -j144 00:01:56.346 make[1]: Nothing to be done for 'all'. 00:01:57.289 The Meson build system 00:01:57.289 Version: 1.3.1 00:01:57.289 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:57.289 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:57.289 Build type: native build 00:01:57.289 Project name: libvfio-user 00:01:57.289 Project version: 0.0.1 00:01:57.289 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:57.289 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:57.289 Host machine cpu family: x86_64 00:01:57.289 Host machine cpu: x86_64 00:01:57.289 Run-time dependency threads found: YES 00:01:57.289 Library dl found: YES 00:01:57.289 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:57.289 Run-time dependency json-c found: YES 0.17 00:01:57.289 Run-time dependency cmocka found: YES 1.1.7 00:01:57.289 Program pytest-3 found: NO 00:01:57.289 Program flake8 found: NO 00:01:57.289 Program misspell-fixer found: NO 00:01:57.290 Program restructuredtext-lint found: NO 00:01:57.290 Program valgrind found: YES (/usr/bin/valgrind) 00:01:57.290 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:57.290 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:57.290 Compiler for C supports arguments -Wwrite-strings: YES 00:01:57.290 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:57.290 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:57.290 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:57.290 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:57.290 Build targets in project: 8 00:01:57.290 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:57.290 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:57.290 00:01:57.290 libvfio-user 0.0.1 00:01:57.290 00:01:57.290 User defined options 00:01:57.290 buildtype : debug 00:01:57.290 default_library: shared 00:01:57.290 libdir : /usr/local/lib 00:01:57.290 00:01:57.290 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.548 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:57.805 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:57.805 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:57.805 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:57.805 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:57.805 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:57.805 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:57.805 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:57.805 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:57.805 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:57.805 [10/37] Compiling C object samples/null.p/null.c.o 00:01:57.805 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:57.805 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:57.805 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:57.805 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:57.805 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:57.805 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:57.805 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:57.805 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:57.805 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:57.805 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:57.805 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:57.805 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:57.805 [23/37] Compiling C object samples/server.p/server.c.o 00:01:57.805 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:57.805 [25/37] Compiling C object samples/client.p/client.c.o 00:01:57.805 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:57.805 [27/37] Linking target samples/client 00:01:57.805 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:57.805 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:58.063 [30/37] Linking target test/unit_tests 00:01:58.063 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:58.063 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:58.063 [33/37] Linking target samples/shadow_ioeventfd_server 00:01:58.063 [34/37] Linking target samples/lspci 00:01:58.063 [35/37] Linking target samples/server 00:01:58.063 [36/37] Linking target samples/null 00:01:58.063 [37/37] Linking target samples/gpio-pci-idio-16 00:01:58.063 INFO: autodetecting backend as ninja 00:01:58.063 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:58.063 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:58.324 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:58.324 ninja: no work to do. 00:02:04.911 CC lib/ut/ut.o 00:02:05.172 CC lib/log/log.o 00:02:05.172 CC lib/log/log_flags.o 00:02:05.172 CC lib/log/log_deprecated.o 00:02:05.172 CC lib/ut_mock/mock.o 00:02:05.172 LIB libspdk_ut.a 00:02:05.172 LIB libspdk_log.a 00:02:05.172 SO libspdk_ut.so.2.0 00:02:05.172 LIB libspdk_ut_mock.a 00:02:05.172 SO libspdk_ut_mock.so.6.0 00:02:05.172 SO libspdk_log.so.7.0 00:02:05.433 SYMLINK libspdk_ut.so 00:02:05.433 SYMLINK libspdk_ut_mock.so 00:02:05.433 SYMLINK libspdk_log.so 00:02:05.693 CXX lib/trace_parser/trace.o 00:02:05.693 CC lib/dma/dma.o 00:02:05.693 CC lib/ioat/ioat.o 00:02:05.693 CC lib/util/base64.o 00:02:05.693 CC lib/util/bit_array.o 00:02:05.693 CC lib/util/cpuset.o 00:02:05.693 CC lib/util/crc16.o 00:02:05.693 CC lib/util/crc32.o 00:02:05.693 CC lib/util/crc64.o 00:02:05.693 CC lib/util/crc32c.o 00:02:05.693 CC lib/util/crc32_ieee.o 00:02:05.693 CC lib/util/fd.o 00:02:05.693 CC lib/util/dif.o 00:02:05.693 CC lib/util/file.o 00:02:05.693 CC lib/util/hexlify.o 00:02:05.693 CC lib/util/iov.o 00:02:05.693 CC lib/util/math.o 00:02:05.693 CC lib/util/pipe.o 00:02:05.693 CC lib/util/strerror_tls.o 00:02:05.693 CC lib/util/string.o 00:02:05.693 CC lib/util/uuid.o 00:02:05.693 CC lib/util/fd_group.o 00:02:05.693 CC lib/util/xor.o 00:02:05.693 CC lib/util/zipf.o 00:02:05.954 CC lib/vfio_user/host/vfio_user.o 00:02:05.954 CC lib/vfio_user/host/vfio_user_pci.o 00:02:05.954 LIB libspdk_dma.a 00:02:05.954 SO libspdk_dma.so.4.0 00:02:05.954 LIB libspdk_ioat.a 00:02:05.954 SYMLINK libspdk_dma.so 00:02:05.954 SO libspdk_ioat.so.7.0 00:02:06.214 SYMLINK libspdk_ioat.so 00:02:06.214 LIB libspdk_vfio_user.a 00:02:06.214 SO libspdk_vfio_user.so.5.0 00:02:06.214 LIB libspdk_util.a 00:02:06.214 SYMLINK libspdk_vfio_user.so 00:02:06.214 SO libspdk_util.so.9.0 00:02:06.475 SYMLINK libspdk_util.so 00:02:06.475 LIB libspdk_trace_parser.a 00:02:06.475 SO libspdk_trace_parser.so.5.0 00:02:06.735 SYMLINK libspdk_trace_parser.so 00:02:06.735 CC lib/conf/conf.o 00:02:06.735 CC lib/json/json_parse.o 00:02:06.735 CC lib/json/json_util.o 00:02:06.735 CC lib/idxd/idxd.o 00:02:06.735 CC lib/idxd/idxd_user.o 00:02:06.735 CC lib/json/json_write.o 00:02:06.735 CC lib/env_dpdk/env.o 00:02:06.735 CC lib/idxd/idxd_kernel.o 00:02:06.735 CC lib/env_dpdk/memory.o 00:02:06.735 CC lib/rdma/common.o 00:02:06.735 CC lib/env_dpdk/pci.o 00:02:06.735 CC lib/rdma/rdma_verbs.o 00:02:06.735 CC lib/env_dpdk/init.o 00:02:06.735 CC lib/env_dpdk/threads.o 00:02:06.735 CC lib/vmd/vmd.o 00:02:06.735 CC lib/env_dpdk/pci_ioat.o 00:02:06.735 CC lib/vmd/led.o 00:02:06.735 CC lib/env_dpdk/pci_virtio.o 00:02:06.735 CC lib/env_dpdk/pci_vmd.o 00:02:06.735 CC lib/env_dpdk/pci_idxd.o 00:02:06.735 CC lib/env_dpdk/pci_event.o 00:02:06.736 CC lib/env_dpdk/sigbus_handler.o 00:02:06.736 CC lib/env_dpdk/pci_dpdk.o 00:02:06.736 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:06.736 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:06.996 LIB libspdk_conf.a 00:02:06.996 SO libspdk_conf.so.6.0 00:02:06.996 LIB libspdk_json.a 00:02:06.996 LIB libspdk_rdma.a 00:02:06.996 SO libspdk_rdma.so.6.0 00:02:06.996 SO libspdk_json.so.6.0 00:02:07.258 SYMLINK libspdk_conf.so 00:02:07.258 SYMLINK libspdk_rdma.so 00:02:07.258 SYMLINK 
libspdk_json.so 00:02:07.258 LIB libspdk_idxd.a 00:02:07.258 SO libspdk_idxd.so.12.0 00:02:07.258 LIB libspdk_vmd.a 00:02:07.519 SYMLINK libspdk_idxd.so 00:02:07.519 SO libspdk_vmd.so.6.0 00:02:07.519 SYMLINK libspdk_vmd.so 00:02:07.519 CC lib/jsonrpc/jsonrpc_server.o 00:02:07.519 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:07.519 CC lib/jsonrpc/jsonrpc_client.o 00:02:07.519 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:07.780 LIB libspdk_jsonrpc.a 00:02:07.780 SO libspdk_jsonrpc.so.6.0 00:02:08.040 SYMLINK libspdk_jsonrpc.so 00:02:08.040 LIB libspdk_env_dpdk.a 00:02:08.040 SO libspdk_env_dpdk.so.14.1 00:02:08.302 SYMLINK libspdk_env_dpdk.so 00:02:08.302 CC lib/rpc/rpc.o 00:02:08.562 LIB libspdk_rpc.a 00:02:08.562 SO libspdk_rpc.so.6.0 00:02:08.562 SYMLINK libspdk_rpc.so 00:02:08.823 CC lib/notify/notify.o 00:02:08.823 CC lib/notify/notify_rpc.o 00:02:09.085 CC lib/keyring/keyring_rpc.o 00:02:09.085 CC lib/keyring/keyring.o 00:02:09.085 CC lib/trace/trace.o 00:02:09.085 CC lib/trace/trace_flags.o 00:02:09.085 CC lib/trace/trace_rpc.o 00:02:09.085 LIB libspdk_notify.a 00:02:09.085 SO libspdk_notify.so.6.0 00:02:09.085 LIB libspdk_keyring.a 00:02:09.085 LIB libspdk_trace.a 00:02:09.346 SO libspdk_keyring.so.1.0 00:02:09.346 SYMLINK libspdk_notify.so 00:02:09.346 SO libspdk_trace.so.10.0 00:02:09.346 SYMLINK libspdk_keyring.so 00:02:09.346 SYMLINK libspdk_trace.so 00:02:09.609 CC lib/sock/sock.o 00:02:09.609 CC lib/sock/sock_rpc.o 00:02:09.609 CC lib/thread/thread.o 00:02:09.609 CC lib/thread/iobuf.o 00:02:10.180 LIB libspdk_sock.a 00:02:10.180 SO libspdk_sock.so.9.0 00:02:10.180 SYMLINK libspdk_sock.so 00:02:10.441 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:10.441 CC lib/nvme/nvme_ctrlr.o 00:02:10.441 CC lib/nvme/nvme_fabric.o 00:02:10.441 CC lib/nvme/nvme_ns_cmd.o 00:02:10.441 CC lib/nvme/nvme_ns.o 00:02:10.441 CC lib/nvme/nvme_pcie_common.o 00:02:10.441 CC lib/nvme/nvme_pcie.o 00:02:10.441 CC lib/nvme/nvme_qpair.o 00:02:10.441 CC lib/nvme/nvme.o 00:02:10.441 CC lib/nvme/nvme_quirks.o 00:02:10.441 CC lib/nvme/nvme_transport.o 00:02:10.441 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:10.441 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:10.441 CC lib/nvme/nvme_discovery.o 00:02:10.441 CC lib/nvme/nvme_tcp.o 00:02:10.441 CC lib/nvme/nvme_opal.o 00:02:10.441 CC lib/nvme/nvme_io_msg.o 00:02:10.441 CC lib/nvme/nvme_poll_group.o 00:02:10.441 CC lib/nvme/nvme_zns.o 00:02:10.441 CC lib/nvme/nvme_stubs.o 00:02:10.441 CC lib/nvme/nvme_auth.o 00:02:10.441 CC lib/nvme/nvme_cuse.o 00:02:10.441 CC lib/nvme/nvme_vfio_user.o 00:02:10.441 CC lib/nvme/nvme_rdma.o 00:02:11.012 LIB libspdk_thread.a 00:02:11.012 SO libspdk_thread.so.10.0 00:02:11.012 SYMLINK libspdk_thread.so 00:02:11.273 CC lib/init/json_config.o 00:02:11.273 CC lib/init/subsystem.o 00:02:11.273 CC lib/init/subsystem_rpc.o 00:02:11.273 CC lib/init/rpc.o 00:02:11.273 CC lib/accel/accel.o 00:02:11.273 CC lib/accel/accel_rpc.o 00:02:11.273 CC lib/accel/accel_sw.o 00:02:11.273 CC lib/vfu_tgt/tgt_endpoint.o 00:02:11.273 CC lib/vfu_tgt/tgt_rpc.o 00:02:11.273 CC lib/blob/blobstore.o 00:02:11.273 CC lib/blob/request.o 00:02:11.273 CC lib/blob/zeroes.o 00:02:11.273 CC lib/blob/blob_bs_dev.o 00:02:11.273 CC lib/virtio/virtio.o 00:02:11.273 CC lib/virtio/virtio_vhost_user.o 00:02:11.273 CC lib/virtio/virtio_vfio_user.o 00:02:11.273 CC lib/virtio/virtio_pci.o 00:02:11.534 LIB libspdk_init.a 00:02:11.534 SO libspdk_init.so.5.0 00:02:11.534 SYMLINK libspdk_init.so 00:02:11.534 LIB libspdk_vfu_tgt.a 00:02:11.794 LIB libspdk_virtio.a 00:02:11.794 SO libspdk_vfu_tgt.so.3.0 00:02:11.794 
SO libspdk_virtio.so.7.0 00:02:11.794 SYMLINK libspdk_vfu_tgt.so 00:02:11.794 SYMLINK libspdk_virtio.so 00:02:12.055 CC lib/event/app.o 00:02:12.055 CC lib/event/reactor.o 00:02:12.055 CC lib/event/log_rpc.o 00:02:12.055 CC lib/event/app_rpc.o 00:02:12.055 CC lib/event/scheduler_static.o 00:02:12.316 LIB libspdk_accel.a 00:02:12.316 SO libspdk_accel.so.15.0 00:02:12.316 LIB libspdk_nvme.a 00:02:12.316 SYMLINK libspdk_accel.so 00:02:12.316 LIB libspdk_event.a 00:02:12.316 SO libspdk_nvme.so.13.0 00:02:12.316 SO libspdk_event.so.13.1 00:02:12.577 SYMLINK libspdk_event.so 00:02:12.577 CC lib/bdev/bdev.o 00:02:12.577 CC lib/bdev/bdev_rpc.o 00:02:12.577 CC lib/bdev/bdev_zone.o 00:02:12.577 CC lib/bdev/part.o 00:02:12.577 CC lib/bdev/scsi_nvme.o 00:02:12.577 SYMLINK libspdk_nvme.so 00:02:13.965 LIB libspdk_blob.a 00:02:13.965 SO libspdk_blob.so.11.0 00:02:13.965 SYMLINK libspdk_blob.so 00:02:14.536 CC lib/lvol/lvol.o 00:02:14.536 CC lib/blobfs/blobfs.o 00:02:14.536 CC lib/blobfs/tree.o 00:02:14.797 LIB libspdk_bdev.a 00:02:15.095 SO libspdk_bdev.so.15.0 00:02:15.095 SYMLINK libspdk_bdev.so 00:02:15.095 LIB libspdk_blobfs.a 00:02:15.095 SO libspdk_blobfs.so.10.0 00:02:15.095 LIB libspdk_lvol.a 00:02:15.095 SO libspdk_lvol.so.10.0 00:02:15.095 SYMLINK libspdk_blobfs.so 00:02:15.355 SYMLINK libspdk_lvol.so 00:02:15.355 CC lib/nvmf/ctrlr.o 00:02:15.355 CC lib/nvmf/ctrlr_discovery.o 00:02:15.355 CC lib/nvmf/ctrlr_bdev.o 00:02:15.355 CC lib/nbd/nbd.o 00:02:15.355 CC lib/nvmf/nvmf_rpc.o 00:02:15.355 CC lib/scsi/dev.o 00:02:15.355 CC lib/nvmf/subsystem.o 00:02:15.355 CC lib/ftl/ftl_core.o 00:02:15.355 CC lib/scsi/port.o 00:02:15.355 CC lib/nbd/nbd_rpc.o 00:02:15.355 CC lib/ftl/ftl_init.o 00:02:15.355 CC lib/scsi/lun.o 00:02:15.355 CC lib/nvmf/nvmf.o 00:02:15.355 CC lib/ftl/ftl_layout.o 00:02:15.355 CC lib/scsi/scsi.o 00:02:15.355 CC lib/nvmf/transport.o 00:02:15.355 CC lib/ftl/ftl_debug.o 00:02:15.355 CC lib/scsi/scsi_bdev.o 00:02:15.355 CC lib/nvmf/tcp.o 00:02:15.355 CC lib/ftl/ftl_io.o 00:02:15.355 CC lib/scsi/scsi_pr.o 00:02:15.355 CC lib/nvmf/stubs.o 00:02:15.355 CC lib/ftl/ftl_sb.o 00:02:15.355 CC lib/ublk/ublk.o 00:02:15.355 CC lib/scsi/scsi_rpc.o 00:02:15.355 CC lib/nvmf/mdns_server.o 00:02:15.355 CC lib/ublk/ublk_rpc.o 00:02:15.355 CC lib/ftl/ftl_l2p.o 00:02:15.355 CC lib/scsi/task.o 00:02:15.355 CC lib/nvmf/vfio_user.o 00:02:15.355 CC lib/ftl/ftl_l2p_flat.o 00:02:15.355 CC lib/nvmf/rdma.o 00:02:15.355 CC lib/ftl/ftl_nv_cache.o 00:02:15.355 CC lib/ftl/ftl_band.o 00:02:15.355 CC lib/nvmf/auth.o 00:02:15.355 CC lib/ftl/ftl_band_ops.o 00:02:15.355 CC lib/ftl/ftl_writer.o 00:02:15.355 CC lib/ftl/ftl_rq.o 00:02:15.355 CC lib/ftl/ftl_reloc.o 00:02:15.355 CC lib/ftl/ftl_l2p_cache.o 00:02:15.355 CC lib/ftl/ftl_p2l.o 00:02:15.355 CC lib/ftl/mngt/ftl_mngt.o 00:02:15.355 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:15.355 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:15.355 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:15.355 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:15.355 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:15.355 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:15.355 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:15.355 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:15.355 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:15.355 CC lib/ftl/utils/ftl_conf.o 00:02:15.355 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:15.355 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:15.355 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:15.615 CC lib/ftl/utils/ftl_md.o 00:02:15.615 CC lib/ftl/utils/ftl_mempool.o 00:02:15.615 CC lib/ftl/utils/ftl_bitmap.o 00:02:15.615 CC 
lib/ftl/utils/ftl_property.o 00:02:15.615 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:15.615 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:15.615 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:15.615 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:15.615 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:15.615 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:15.615 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:15.615 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:15.615 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:15.615 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:15.615 CC lib/ftl/base/ftl_base_bdev.o 00:02:15.615 CC lib/ftl/base/ftl_base_dev.o 00:02:15.615 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:15.615 CC lib/ftl/ftl_trace.o 00:02:15.875 LIB libspdk_nbd.a 00:02:15.875 SO libspdk_nbd.so.7.0 00:02:15.875 SYMLINK libspdk_nbd.so 00:02:16.135 LIB libspdk_scsi.a 00:02:16.135 SO libspdk_scsi.so.9.0 00:02:16.135 LIB libspdk_ublk.a 00:02:16.135 SO libspdk_ublk.so.3.0 00:02:16.135 SYMLINK libspdk_scsi.so 00:02:16.396 SYMLINK libspdk_ublk.so 00:02:16.396 LIB libspdk_ftl.a 00:02:16.657 CC lib/vhost/vhost.o 00:02:16.657 CC lib/iscsi/conn.o 00:02:16.657 CC lib/vhost/vhost_rpc.o 00:02:16.657 CC lib/iscsi/init_grp.o 00:02:16.657 CC lib/vhost/vhost_scsi.o 00:02:16.657 CC lib/iscsi/iscsi.o 00:02:16.657 CC lib/iscsi/md5.o 00:02:16.657 CC lib/vhost/vhost_blk.o 00:02:16.657 CC lib/iscsi/param.o 00:02:16.657 CC lib/iscsi/portal_grp.o 00:02:16.657 CC lib/vhost/rte_vhost_user.o 00:02:16.657 CC lib/iscsi/tgt_node.o 00:02:16.657 CC lib/iscsi/iscsi_subsystem.o 00:02:16.657 CC lib/iscsi/iscsi_rpc.o 00:02:16.657 CC lib/iscsi/task.o 00:02:16.657 SO libspdk_ftl.so.9.0 00:02:16.918 SYMLINK libspdk_ftl.so 00:02:17.490 LIB libspdk_nvmf.a 00:02:17.490 SO libspdk_nvmf.so.18.1 00:02:17.490 LIB libspdk_vhost.a 00:02:17.490 SO libspdk_vhost.so.8.0 00:02:17.750 SYMLINK libspdk_nvmf.so 00:02:17.750 SYMLINK libspdk_vhost.so 00:02:17.750 LIB libspdk_iscsi.a 00:02:17.750 SO libspdk_iscsi.so.8.0 00:02:18.011 SYMLINK libspdk_iscsi.so 00:02:18.584 CC module/vfu_device/vfu_virtio.o 00:02:18.584 CC module/vfu_device/vfu_virtio_blk.o 00:02:18.584 CC module/vfu_device/vfu_virtio_scsi.o 00:02:18.584 CC module/vfu_device/vfu_virtio_rpc.o 00:02:18.584 CC module/env_dpdk/env_dpdk_rpc.o 00:02:18.584 CC module/accel/ioat/accel_ioat.o 00:02:18.584 CC module/accel/ioat/accel_ioat_rpc.o 00:02:18.584 CC module/accel/iaa/accel_iaa.o 00:02:18.584 CC module/scheduler/gscheduler/gscheduler.o 00:02:18.584 CC module/accel/error/accel_error.o 00:02:18.584 CC module/accel/iaa/accel_iaa_rpc.o 00:02:18.584 CC module/accel/error/accel_error_rpc.o 00:02:18.584 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:18.584 CC module/sock/posix/posix.o 00:02:18.584 CC module/accel/dsa/accel_dsa.o 00:02:18.584 CC module/accel/dsa/accel_dsa_rpc.o 00:02:18.584 LIB libspdk_env_dpdk_rpc.a 00:02:18.584 CC module/blob/bdev/blob_bdev.o 00:02:18.584 CC module/keyring/file/keyring.o 00:02:18.584 CC module/keyring/file/keyring_rpc.o 00:02:18.584 CC module/keyring/linux/keyring.o 00:02:18.584 CC module/keyring/linux/keyring_rpc.o 00:02:18.584 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:18.584 SO libspdk_env_dpdk_rpc.so.6.0 00:02:18.843 SYMLINK libspdk_env_dpdk_rpc.so 00:02:18.843 LIB libspdk_accel_iaa.a 00:02:18.843 LIB libspdk_scheduler_gscheduler.a 00:02:18.843 LIB libspdk_keyring_file.a 00:02:18.843 LIB libspdk_accel_ioat.a 00:02:18.843 LIB libspdk_keyring_linux.a 00:02:18.843 SO libspdk_scheduler_gscheduler.so.4.0 00:02:18.843 LIB libspdk_scheduler_dpdk_governor.a 00:02:18.843 SO 
libspdk_accel_iaa.so.3.0 00:02:18.843 SO libspdk_accel_ioat.so.6.0 00:02:18.843 SO libspdk_keyring_file.so.1.0 00:02:18.843 LIB libspdk_accel_error.a 00:02:18.843 SO libspdk_keyring_linux.so.1.0 00:02:18.843 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:18.843 SYMLINK libspdk_scheduler_gscheduler.so 00:02:18.843 LIB libspdk_scheduler_dynamic.a 00:02:18.843 SO libspdk_accel_error.so.2.0 00:02:18.843 SYMLINK libspdk_accel_iaa.so 00:02:18.843 LIB libspdk_accel_dsa.a 00:02:18.843 SYMLINK libspdk_accel_ioat.so 00:02:18.843 LIB libspdk_blob_bdev.a 00:02:18.843 SYMLINK libspdk_keyring_file.so 00:02:18.843 SO libspdk_scheduler_dynamic.so.4.0 00:02:18.843 SYMLINK libspdk_keyring_linux.so 00:02:18.843 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:18.843 SO libspdk_accel_dsa.so.5.0 00:02:19.104 SO libspdk_blob_bdev.so.11.0 00:02:19.104 SYMLINK libspdk_accel_error.so 00:02:19.104 SYMLINK libspdk_scheduler_dynamic.so 00:02:19.104 SYMLINK libspdk_accel_dsa.so 00:02:19.104 LIB libspdk_vfu_device.a 00:02:19.104 SYMLINK libspdk_blob_bdev.so 00:02:19.104 SO libspdk_vfu_device.so.3.0 00:02:19.104 SYMLINK libspdk_vfu_device.so 00:02:19.364 LIB libspdk_sock_posix.a 00:02:19.364 SO libspdk_sock_posix.so.6.0 00:02:19.364 SYMLINK libspdk_sock_posix.so 00:02:19.624 CC module/bdev/gpt/gpt.o 00:02:19.624 CC module/bdev/gpt/vbdev_gpt.o 00:02:19.624 CC module/bdev/aio/bdev_aio.o 00:02:19.624 CC module/bdev/malloc/bdev_malloc.o 00:02:19.624 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:19.624 CC module/bdev/aio/bdev_aio_rpc.o 00:02:19.624 CC module/blobfs/bdev/blobfs_bdev.o 00:02:19.624 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:19.624 CC module/bdev/null/bdev_null.o 00:02:19.624 CC module/bdev/raid/bdev_raid.o 00:02:19.624 CC module/bdev/null/bdev_null_rpc.o 00:02:19.624 CC module/bdev/raid/bdev_raid_rpc.o 00:02:19.624 CC module/bdev/raid/bdev_raid_sb.o 00:02:19.624 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:19.624 CC module/bdev/raid/raid0.o 00:02:19.624 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:19.624 CC module/bdev/error/vbdev_error.o 00:02:19.624 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:19.624 CC module/bdev/delay/vbdev_delay.o 00:02:19.624 CC module/bdev/lvol/vbdev_lvol.o 00:02:19.624 CC module/bdev/raid/raid1.o 00:02:19.624 CC module/bdev/iscsi/bdev_iscsi.o 00:02:19.624 CC module/bdev/error/vbdev_error_rpc.o 00:02:19.624 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:19.624 CC module/bdev/raid/concat.o 00:02:19.624 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:19.624 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:19.624 CC module/bdev/nvme/bdev_nvme.o 00:02:19.624 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:19.624 CC module/bdev/split/vbdev_split.o 00:02:19.624 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:19.624 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:19.624 CC module/bdev/split/vbdev_split_rpc.o 00:02:19.624 CC module/bdev/nvme/nvme_rpc.o 00:02:19.624 CC module/bdev/ftl/bdev_ftl.o 00:02:19.624 CC module/bdev/nvme/bdev_mdns_client.o 00:02:19.624 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:19.624 CC module/bdev/nvme/vbdev_opal.o 00:02:19.624 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:19.624 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:19.624 CC module/bdev/passthru/vbdev_passthru.o 00:02:19.624 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:19.884 LIB libspdk_blobfs_bdev.a 00:02:19.884 SO libspdk_blobfs_bdev.so.6.0 00:02:19.884 LIB libspdk_bdev_gpt.a 00:02:19.884 LIB libspdk_bdev_error.a 00:02:19.884 LIB libspdk_bdev_split.a 00:02:19.884 LIB libspdk_bdev_null.a 00:02:19.884 SO 
libspdk_bdev_gpt.so.6.0 00:02:19.884 SYMLINK libspdk_blobfs_bdev.so 00:02:19.884 LIB libspdk_bdev_ftl.a 00:02:19.884 SO libspdk_bdev_split.so.6.0 00:02:19.884 SO libspdk_bdev_error.so.6.0 00:02:19.884 SO libspdk_bdev_null.so.6.0 00:02:19.884 LIB libspdk_bdev_aio.a 00:02:19.884 LIB libspdk_bdev_zone_block.a 00:02:19.884 LIB libspdk_bdev_malloc.a 00:02:19.884 SO libspdk_bdev_ftl.so.6.0 00:02:19.884 LIB libspdk_bdev_passthru.a 00:02:19.884 LIB libspdk_bdev_delay.a 00:02:20.144 SO libspdk_bdev_malloc.so.6.0 00:02:20.144 SYMLINK libspdk_bdev_gpt.so 00:02:20.144 SO libspdk_bdev_aio.so.6.0 00:02:20.144 LIB libspdk_bdev_iscsi.a 00:02:20.144 SO libspdk_bdev_zone_block.so.6.0 00:02:20.144 SYMLINK libspdk_bdev_null.so 00:02:20.144 SO libspdk_bdev_passthru.so.6.0 00:02:20.144 SYMLINK libspdk_bdev_split.so 00:02:20.144 SYMLINK libspdk_bdev_error.so 00:02:20.144 SO libspdk_bdev_delay.so.6.0 00:02:20.144 SYMLINK libspdk_bdev_ftl.so 00:02:20.144 SO libspdk_bdev_iscsi.so.6.0 00:02:20.144 SYMLINK libspdk_bdev_aio.so 00:02:20.144 SYMLINK libspdk_bdev_malloc.so 00:02:20.144 SYMLINK libspdk_bdev_zone_block.so 00:02:20.144 SYMLINK libspdk_bdev_passthru.so 00:02:20.144 SYMLINK libspdk_bdev_delay.so 00:02:20.144 LIB libspdk_bdev_lvol.a 00:02:20.144 SYMLINK libspdk_bdev_iscsi.so 00:02:20.144 LIB libspdk_bdev_virtio.a 00:02:20.144 SO libspdk_bdev_lvol.so.6.0 00:02:20.144 SO libspdk_bdev_virtio.so.6.0 00:02:20.144 SYMLINK libspdk_bdev_lvol.so 00:02:20.403 SYMLINK libspdk_bdev_virtio.so 00:02:20.403 LIB libspdk_bdev_raid.a 00:02:20.663 SO libspdk_bdev_raid.so.6.0 00:02:20.663 SYMLINK libspdk_bdev_raid.so 00:02:21.606 LIB libspdk_bdev_nvme.a 00:02:21.606 SO libspdk_bdev_nvme.so.7.0 00:02:21.606 SYMLINK libspdk_bdev_nvme.so 00:02:22.551 CC module/event/subsystems/vmd/vmd.o 00:02:22.551 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:22.551 CC module/event/subsystems/iobuf/iobuf.o 00:02:22.551 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:22.551 CC module/event/subsystems/scheduler/scheduler.o 00:02:22.551 CC module/event/subsystems/keyring/keyring.o 00:02:22.551 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:22.551 CC module/event/subsystems/sock/sock.o 00:02:22.551 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:22.551 LIB libspdk_event_vmd.a 00:02:22.551 LIB libspdk_event_vhost_blk.a 00:02:22.551 LIB libspdk_event_keyring.a 00:02:22.551 LIB libspdk_event_iobuf.a 00:02:22.551 LIB libspdk_event_scheduler.a 00:02:22.551 LIB libspdk_event_sock.a 00:02:22.551 LIB libspdk_event_vfu_tgt.a 00:02:22.551 SO libspdk_event_vmd.so.6.0 00:02:22.551 SO libspdk_event_vhost_blk.so.3.0 00:02:22.551 SO libspdk_event_keyring.so.1.0 00:02:22.551 SO libspdk_event_scheduler.so.4.0 00:02:22.551 SO libspdk_event_iobuf.so.3.0 00:02:22.551 SO libspdk_event_sock.so.5.0 00:02:22.551 SO libspdk_event_vfu_tgt.so.3.0 00:02:22.551 SYMLINK libspdk_event_vhost_blk.so 00:02:22.551 SYMLINK libspdk_event_scheduler.so 00:02:22.551 SYMLINK libspdk_event_keyring.so 00:02:22.551 SYMLINK libspdk_event_vmd.so 00:02:22.551 SYMLINK libspdk_event_sock.so 00:02:22.551 SYMLINK libspdk_event_vfu_tgt.so 00:02:22.551 SYMLINK libspdk_event_iobuf.so 00:02:23.122 CC module/event/subsystems/accel/accel.o 00:02:23.122 LIB libspdk_event_accel.a 00:02:23.122 SO libspdk_event_accel.so.6.0 00:02:23.384 SYMLINK libspdk_event_accel.so 00:02:23.644 CC module/event/subsystems/bdev/bdev.o 00:02:23.644 LIB libspdk_event_bdev.a 00:02:23.904 SO libspdk_event_bdev.so.6.0 00:02:23.904 SYMLINK libspdk_event_bdev.so 00:02:24.165 CC module/event/subsystems/ublk/ublk.o 
00:02:24.165 CC module/event/subsystems/scsi/scsi.o 00:02:24.165 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:24.165 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:24.165 CC module/event/subsystems/nbd/nbd.o 00:02:24.426 LIB libspdk_event_ublk.a 00:02:24.426 LIB libspdk_event_nbd.a 00:02:24.426 LIB libspdk_event_scsi.a 00:02:24.426 SO libspdk_event_ublk.so.3.0 00:02:24.426 SO libspdk_event_nbd.so.6.0 00:02:24.426 SO libspdk_event_scsi.so.6.0 00:02:24.426 LIB libspdk_event_nvmf.a 00:02:24.426 SYMLINK libspdk_event_ublk.so 00:02:24.426 SO libspdk_event_nvmf.so.6.0 00:02:24.426 SYMLINK libspdk_event_nbd.so 00:02:24.426 SYMLINK libspdk_event_scsi.so 00:02:24.686 SYMLINK libspdk_event_nvmf.so 00:02:24.947 CC module/event/subsystems/iscsi/iscsi.o 00:02:24.947 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:24.947 LIB libspdk_event_vhost_scsi.a 00:02:24.947 LIB libspdk_event_iscsi.a 00:02:24.947 SO libspdk_event_vhost_scsi.so.3.0 00:02:25.207 SO libspdk_event_iscsi.so.6.0 00:02:25.207 SYMLINK libspdk_event_vhost_scsi.so 00:02:25.207 SYMLINK libspdk_event_iscsi.so 00:02:25.468 SO libspdk.so.6.0 00:02:25.468 SYMLINK libspdk.so 00:02:25.727 CC app/spdk_nvme_perf/perf.o 00:02:25.727 CC app/trace_record/trace_record.o 00:02:25.727 CC app/spdk_nvme_identify/identify.o 00:02:25.727 CC app/spdk_lspci/spdk_lspci.o 00:02:25.727 CC test/rpc_client/rpc_client_test.o 00:02:25.727 TEST_HEADER include/spdk/accel.h 00:02:25.727 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:25.727 TEST_HEADER include/spdk/accel_module.h 00:02:25.727 TEST_HEADER include/spdk/barrier.h 00:02:25.727 TEST_HEADER include/spdk/assert.h 00:02:25.727 CXX app/trace/trace.o 00:02:25.727 TEST_HEADER include/spdk/base64.h 00:02:25.727 CC app/spdk_nvme_discover/discovery_aer.o 00:02:25.727 TEST_HEADER include/spdk/bdev_module.h 00:02:25.727 TEST_HEADER include/spdk/bdev.h 00:02:25.727 TEST_HEADER include/spdk/bit_array.h 00:02:25.727 CC app/spdk_top/spdk_top.o 00:02:25.727 TEST_HEADER include/spdk/bit_pool.h 00:02:25.727 CC app/nvmf_tgt/nvmf_main.o 00:02:25.727 TEST_HEADER include/spdk/blob_bdev.h 00:02:25.727 TEST_HEADER include/spdk/bdev_zone.h 00:02:25.727 TEST_HEADER include/spdk/blobfs.h 00:02:25.727 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:25.727 TEST_HEADER include/spdk/blob.h 00:02:25.727 TEST_HEADER include/spdk/config.h 00:02:25.727 CC app/vhost/vhost.o 00:02:25.727 TEST_HEADER include/spdk/crc32.h 00:02:25.727 TEST_HEADER include/spdk/crc16.h 00:02:25.727 TEST_HEADER include/spdk/crc64.h 00:02:25.727 TEST_HEADER include/spdk/cpuset.h 00:02:25.727 TEST_HEADER include/spdk/dif.h 00:02:25.727 TEST_HEADER include/spdk/dma.h 00:02:25.727 TEST_HEADER include/spdk/endian.h 00:02:25.727 TEST_HEADER include/spdk/env.h 00:02:25.727 TEST_HEADER include/spdk/conf.h 00:02:25.727 TEST_HEADER include/spdk/fd.h 00:02:25.727 TEST_HEADER include/spdk/file.h 00:02:25.727 CC app/iscsi_tgt/iscsi_tgt.o 00:02:25.727 TEST_HEADER include/spdk/ftl.h 00:02:25.727 TEST_HEADER include/spdk/event.h 00:02:25.727 TEST_HEADER include/spdk/gpt_spec.h 00:02:25.727 TEST_HEADER include/spdk/hexlify.h 00:02:25.727 TEST_HEADER include/spdk/idxd.h 00:02:25.727 TEST_HEADER include/spdk/env_dpdk.h 00:02:25.988 TEST_HEADER include/spdk/histogram_data.h 00:02:25.988 TEST_HEADER include/spdk/fd_group.h 00:02:25.988 TEST_HEADER include/spdk/ioat.h 00:02:25.988 TEST_HEADER include/spdk/idxd_spec.h 00:02:25.988 TEST_HEADER include/spdk/ioat_spec.h 00:02:25.988 TEST_HEADER include/spdk/iscsi_spec.h 00:02:25.988 TEST_HEADER include/spdk/json.h 00:02:25.988 
TEST_HEADER include/spdk/jsonrpc.h 00:02:25.988 TEST_HEADER include/spdk/init.h 00:02:25.988 TEST_HEADER include/spdk/keyring.h 00:02:25.988 TEST_HEADER include/spdk/likely.h 00:02:25.988 CC app/spdk_dd/spdk_dd.o 00:02:25.988 TEST_HEADER include/spdk/log.h 00:02:25.988 TEST_HEADER include/spdk/keyring_module.h 00:02:25.988 TEST_HEADER include/spdk/memory.h 00:02:25.988 TEST_HEADER include/spdk/lvol.h 00:02:25.988 TEST_HEADER include/spdk/nbd.h 00:02:25.988 TEST_HEADER include/spdk/mmio.h 00:02:25.988 TEST_HEADER include/spdk/nvme.h 00:02:25.988 TEST_HEADER include/spdk/notify.h 00:02:25.988 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:25.988 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:25.988 TEST_HEADER include/spdk/nvme_intel.h 00:02:25.988 TEST_HEADER include/spdk/nvme_zns.h 00:02:25.988 TEST_HEADER include/spdk/nvme_spec.h 00:02:25.988 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:25.988 TEST_HEADER include/spdk/nvmf.h 00:02:25.988 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:25.988 CC app/spdk_tgt/spdk_tgt.o 00:02:25.988 TEST_HEADER include/spdk/nvmf_spec.h 00:02:25.988 TEST_HEADER include/spdk/opal.h 00:02:25.988 TEST_HEADER include/spdk/nvmf_transport.h 00:02:25.988 TEST_HEADER include/spdk/pci_ids.h 00:02:25.988 TEST_HEADER include/spdk/pipe.h 00:02:25.988 TEST_HEADER include/spdk/queue.h 00:02:25.988 TEST_HEADER include/spdk/opal_spec.h 00:02:25.988 TEST_HEADER include/spdk/rpc.h 00:02:25.988 TEST_HEADER include/spdk/scheduler.h 00:02:25.988 TEST_HEADER include/spdk/scsi.h 00:02:25.988 TEST_HEADER include/spdk/reduce.h 00:02:25.988 TEST_HEADER include/spdk/sock.h 00:02:25.988 TEST_HEADER include/spdk/scsi_spec.h 00:02:25.988 TEST_HEADER include/spdk/stdinc.h 00:02:25.988 TEST_HEADER include/spdk/string.h 00:02:25.988 TEST_HEADER include/spdk/thread.h 00:02:25.988 TEST_HEADER include/spdk/trace.h 00:02:25.988 TEST_HEADER include/spdk/tree.h 00:02:25.988 TEST_HEADER include/spdk/trace_parser.h 00:02:25.988 TEST_HEADER include/spdk/util.h 00:02:25.988 CC examples/thread/thread/thread_ex.o 00:02:25.988 CC examples/nvme/arbitration/arbitration.o 00:02:25.988 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:25.988 TEST_HEADER include/spdk/ublk.h 00:02:25.988 TEST_HEADER include/spdk/uuid.h 00:02:25.988 TEST_HEADER include/spdk/version.h 00:02:25.988 TEST_HEADER include/spdk/vhost.h 00:02:25.988 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:25.988 CC examples/nvme/hotplug/hotplug.o 00:02:25.988 TEST_HEADER include/spdk/vmd.h 00:02:25.988 CC examples/nvme/reconnect/reconnect.o 00:02:25.988 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:25.988 TEST_HEADER include/spdk/zipf.h 00:02:25.988 CC examples/ioat/verify/verify.o 00:02:25.988 CXX test/cpp_headers/accel_module.o 00:02:25.988 CXX test/cpp_headers/assert.o 00:02:25.988 CC test/thread/poller_perf/poller_perf.o 00:02:25.988 CC test/event/event_perf/event_perf.o 00:02:25.988 TEST_HEADER include/spdk/xor.h 00:02:25.988 CXX test/cpp_headers/accel.o 00:02:25.988 CXX test/cpp_headers/barrier.o 00:02:25.988 CC test/nvme/aer/aer.o 00:02:25.988 CC examples/bdev/bdevperf/bdevperf.o 00:02:25.988 CXX test/cpp_headers/bdev_module.o 00:02:25.988 CC test/app/stub/stub.o 00:02:25.988 CC examples/bdev/hello_world/hello_bdev.o 00:02:25.988 CXX test/cpp_headers/base64.o 00:02:25.988 CXX test/cpp_headers/bdev.o 00:02:25.988 CXX test/cpp_headers/bit_array.o 00:02:25.988 CXX test/cpp_headers/blob_bdev.o 00:02:25.988 CXX test/cpp_headers/bdev_zone.o 00:02:25.988 CXX test/cpp_headers/blobfs_bdev.o 00:02:25.988 CXX test/cpp_headers/blobfs.o 00:02:25.988 CXX 
test/cpp_headers/blob.o 00:02:25.988 CXX test/cpp_headers/conf.o 00:02:25.988 CXX test/cpp_headers/bit_pool.o 00:02:25.988 CC examples/vmd/led/led.o 00:02:25.988 CXX test/cpp_headers/config.o 00:02:25.988 CC examples/sock/hello_world/hello_sock.o 00:02:25.988 CXX test/cpp_headers/crc16.o 00:02:25.988 CXX test/cpp_headers/crc32.o 00:02:25.988 CC test/nvme/reserve/reserve.o 00:02:25.988 CXX test/cpp_headers/crc64.o 00:02:25.988 CXX test/cpp_headers/dif.o 00:02:25.988 CC test/env/vtophys/vtophys.o 00:02:25.988 CXX test/cpp_headers/endian.o 00:02:25.988 CXX test/cpp_headers/cpuset.o 00:02:25.988 CXX test/cpp_headers/env_dpdk.o 00:02:25.988 CXX test/cpp_headers/env.o 00:02:25.988 CXX test/cpp_headers/file.o 00:02:25.988 CXX test/cpp_headers/fd_group.o 00:02:25.988 CXX test/cpp_headers/event.o 00:02:25.988 CXX test/cpp_headers/dma.o 00:02:25.988 CXX test/cpp_headers/fd.o 00:02:25.988 CXX test/cpp_headers/ftl.o 00:02:25.988 CC test/nvme/e2edp/nvme_dp.o 00:02:25.988 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:25.988 CXX test/cpp_headers/gpt_spec.o 00:02:25.988 CXX test/cpp_headers/histogram_data.o 00:02:25.988 CXX test/cpp_headers/idxd.o 00:02:25.988 CXX test/cpp_headers/idxd_spec.o 00:02:25.988 CC test/nvme/boot_partition/boot_partition.o 00:02:25.988 CXX test/cpp_headers/init.o 00:02:25.988 CXX test/cpp_headers/ioat.o 00:02:25.988 CXX test/cpp_headers/ioat_spec.o 00:02:25.988 CC test/nvme/cuse/cuse.o 00:02:25.988 CC examples/ioat/perf/perf.o 00:02:25.988 CC test/app/bdev_svc/bdev_svc.o 00:02:25.988 CC test/nvme/overhead/overhead.o 00:02:25.988 CC examples/nvme/hello_world/hello_world.o 00:02:25.988 CXX test/cpp_headers/jsonrpc.o 00:02:25.988 CXX test/cpp_headers/hexlify.o 00:02:25.988 CC examples/util/zipf/zipf.o 00:02:25.988 CXX test/cpp_headers/keyring_module.o 00:02:25.988 CXX test/cpp_headers/likely.o 00:02:25.988 CC test/accel/dif/dif.o 00:02:25.988 CXX test/cpp_headers/memory.o 00:02:25.988 CXX test/cpp_headers/mmio.o 00:02:25.988 CXX test/cpp_headers/json.o 00:02:25.988 CXX test/cpp_headers/keyring.o 00:02:25.988 CXX test/cpp_headers/iscsi_spec.o 00:02:25.988 CC test/nvme/connect_stress/connect_stress.o 00:02:25.989 CC test/event/reactor_perf/reactor_perf.o 00:02:25.989 CXX test/cpp_headers/notify.o 00:02:25.989 CXX test/cpp_headers/nvme.o 00:02:25.989 CXX test/cpp_headers/log.o 00:02:25.989 CXX test/cpp_headers/lvol.o 00:02:25.989 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:25.989 CXX test/cpp_headers/nvme_spec.o 00:02:25.989 CXX test/cpp_headers/nvme_zns.o 00:02:25.989 CC examples/nvmf/nvmf/nvmf.o 00:02:25.989 CXX test/cpp_headers/nbd.o 00:02:25.989 CXX test/cpp_headers/nvmf.o 00:02:25.989 CXX test/cpp_headers/nvmf_spec.o 00:02:25.989 CXX test/cpp_headers/nvme_intel.o 00:02:25.989 CXX test/cpp_headers/nvmf_transport.o 00:02:25.989 CC test/app/histogram_perf/histogram_perf.o 00:02:25.989 CXX test/cpp_headers/opal.o 00:02:25.989 CXX test/cpp_headers/opal_spec.o 00:02:25.989 CC app/fio/nvme/fio_plugin.o 00:02:25.989 CXX test/cpp_headers/nvme_ocssd.o 00:02:25.989 CC test/nvme/startup/startup.o 00:02:25.989 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:25.989 CXX test/cpp_headers/nvmf_cmd.o 00:02:25.989 CC test/nvme/reset/reset.o 00:02:25.989 CC examples/blob/cli/blobcli.o 00:02:25.989 CC examples/nvme/abort/abort.o 00:02:25.989 CXX test/cpp_headers/pci_ids.o 00:02:25.989 CXX test/cpp_headers/pipe.o 00:02:25.989 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:25.989 CC test/nvme/compliance/nvme_compliance.o 00:02:25.989 CXX test/cpp_headers/queue.o 00:02:25.989 CC 
examples/idxd/perf/perf.o 00:02:25.989 CXX test/cpp_headers/reduce.o 00:02:25.989 CXX test/cpp_headers/rpc.o 00:02:25.989 CC app/fio/bdev/fio_plugin.o 00:02:25.989 CC test/bdev/bdevio/bdevio.o 00:02:26.250 CC test/app/jsoncat/jsoncat.o 00:02:26.250 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:26.250 CC test/event/app_repeat/app_repeat.o 00:02:26.250 CC examples/accel/perf/accel_perf.o 00:02:26.250 CC examples/blob/hello_world/hello_blob.o 00:02:26.250 CC test/event/reactor/reactor.o 00:02:26.250 CC test/nvme/fused_ordering/fused_ordering.o 00:02:26.250 CC test/nvme/err_injection/err_injection.o 00:02:26.250 CC test/env/memory/memory_ut.o 00:02:26.250 CC test/event/scheduler/scheduler.o 00:02:26.250 CC test/dma/test_dma/test_dma.o 00:02:26.250 CC test/env/pci/pci_ut.o 00:02:26.250 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:26.250 CC examples/vmd/lsvmd/lsvmd.o 00:02:26.250 CC test/blobfs/mkfs/mkfs.o 00:02:26.250 CC test/nvme/simple_copy/simple_copy.o 00:02:26.250 CXX test/cpp_headers/scheduler.o 00:02:26.250 LINK spdk_nvme_discover 00:02:26.250 CC test/nvme/fdp/fdp.o 00:02:26.250 CC test/nvme/sgl/sgl.o 00:02:26.250 LINK spdk_trace_record 00:02:26.511 LINK nvmf_tgt 00:02:26.511 LINK spdk_tgt 00:02:26.511 LINK vhost 00:02:26.511 LINK event_perf 00:02:26.511 LINK led 00:02:26.511 CXX test/cpp_headers/scsi.o 00:02:26.511 LINK stub 00:02:26.511 LINK zipf 00:02:26.511 CXX test/cpp_headers/scsi_spec.o 00:02:26.511 LINK reserve 00:02:26.511 LINK reactor_perf 00:02:26.511 LINK verify 00:02:26.511 LINK pmr_persistence 00:02:26.511 CXX test/cpp_headers/sock.o 00:02:26.511 LINK thread 00:02:26.511 CXX test/cpp_headers/thread.o 00:02:26.511 LINK hotplug 00:02:26.511 CXX test/cpp_headers/stdinc.o 00:02:26.511 CXX test/cpp_headers/trace.o 00:02:26.511 CXX test/cpp_headers/string.o 00:02:26.511 LINK startup 00:02:26.511 LINK hello_sock 00:02:26.511 LINK hello_bdev 00:02:26.511 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:26.511 LINK jsoncat 00:02:26.511 CXX test/cpp_headers/trace_parser.o 00:02:26.511 CXX test/cpp_headers/tree.o 00:02:26.511 CXX test/cpp_headers/ublk.o 00:02:26.511 CC test/env/mem_callbacks/mem_callbacks.o 00:02:26.511 CXX test/cpp_headers/util.o 00:02:26.511 LINK cmb_copy 00:02:26.511 CXX test/cpp_headers/uuid.o 00:02:26.511 LINK app_repeat 00:02:26.511 CC test/lvol/esnap/esnap.o 00:02:26.769 CXX test/cpp_headers/vfio_user_pci.o 00:02:26.769 CXX test/cpp_headers/version.o 00:02:26.769 LINK connect_stress 00:02:26.769 LINK spdk_dd 00:02:26.769 CXX test/cpp_headers/vfio_user_spec.o 00:02:26.769 CXX test/cpp_headers/vhost.o 00:02:26.769 CXX test/cpp_headers/vmd.o 00:02:26.769 CXX test/cpp_headers/xor.o 00:02:26.769 CXX test/cpp_headers/zipf.o 00:02:26.769 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:26.769 LINK spdk_trace 00:02:26.769 LINK spdk_lspci 00:02:26.769 LINK nvme_dp 00:02:26.769 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:26.769 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:26.769 LINK nvmf 00:02:26.769 LINK reset 00:02:26.769 LINK overhead 00:02:26.769 LINK err_injection 00:02:26.769 LINK hello_blob 00:02:26.769 LINK sgl 00:02:26.769 LINK rpc_client_test 00:02:27.026 LINK abort 00:02:27.026 LINK interrupt_tgt 00:02:27.026 LINK nvme_manage 00:02:27.026 LINK test_dma 00:02:27.026 LINK bdev_svc 00:02:27.026 LINK poller_perf 00:02:27.026 LINK iscsi_tgt 00:02:27.026 LINK histogram_perf 00:02:27.026 LINK spdk_nvme_perf 00:02:27.026 LINK spdk_bdev 00:02:27.026 LINK vtophys 00:02:27.026 LINK reactor 00:02:27.026 LINK env_dpdk_post_init 00:02:27.026 LINK 
boot_partition 00:02:27.026 LINK fused_ordering 00:02:27.026 LINK scheduler 00:02:27.026 LINK bdevio 00:02:27.026 LINK lsvmd 00:02:27.026 LINK ioat_perf 00:02:27.026 LINK spdk_nvme_identify 00:02:27.026 LINK mkfs 00:02:27.026 LINK hello_world 00:02:27.286 LINK doorbell_aers 00:02:27.286 LINK simple_copy 00:02:27.286 LINK aer 00:02:27.286 LINK arbitration 00:02:27.286 LINK nvme_compliance 00:02:27.286 LINK spdk_top 00:02:27.286 LINK bdevperf 00:02:27.286 LINK memory_ut 00:02:27.286 LINK pci_ut 00:02:27.286 LINK mem_callbacks 00:02:27.286 LINK reconnect 00:02:27.286 LINK idxd_perf 00:02:27.286 LINK fdp 00:02:27.286 LINK vhost_fuzz 00:02:27.286 LINK dif 00:02:27.286 LINK nvme_fuzz 00:02:27.286 LINK spdk_nvme 00:02:27.547 LINK blobcli 00:02:27.547 LINK accel_perf 00:02:27.808 LINK cuse 00:02:28.381 LINK iscsi_fuzz 00:02:30.318 LINK esnap 00:02:30.891 00:02:30.891 real 0m35.081s 00:02:30.891 user 5m15.028s 00:02:30.891 sys 3m29.838s 00:02:30.891 14:04:54 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:30.891 14:04:54 make -- common/autotest_common.sh@10 -- $ set +x 00:02:30.891 ************************************ 00:02:30.891 END TEST make 00:02:30.891 ************************************ 00:02:30.891 14:04:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:30.891 14:04:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:30.891 14:04:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:30.891 14:04:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.891 14:04:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:30.891 14:04:54 -- pm/common@44 -- $ pid=149424 00:02:30.891 14:04:54 -- pm/common@50 -- $ kill -TERM 149424 00:02:30.891 14:04:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.891 14:04:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:30.891 14:04:54 -- pm/common@44 -- $ pid=149425 00:02:30.891 14:04:54 -- pm/common@50 -- $ kill -TERM 149425 00:02:30.891 14:04:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.891 14:04:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:30.891 14:04:54 -- pm/common@44 -- $ pid=149428 00:02:30.891 14:04:54 -- pm/common@50 -- $ kill -TERM 149428 00:02:30.891 14:04:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.891 14:04:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:30.891 14:04:54 -- pm/common@44 -- $ pid=149443 00:02:30.891 14:04:54 -- pm/common@50 -- $ sudo -E kill -TERM 149443 00:02:30.891 14:04:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:30.891 14:04:54 -- nvmf/common.sh@7 -- # uname -s 00:02:30.891 14:04:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:30.891 14:04:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:30.891 14:04:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:30.891 14:04:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:30.891 14:04:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:30.891 14:04:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:30.891 14:04:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:30.891 14:04:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:02:30.891 14:04:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:30.891 14:04:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:30.891 14:04:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:02:30.891 14:04:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:02:30.891 14:04:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:30.891 14:04:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:30.891 14:04:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:30.891 14:04:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:30.891 14:04:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:30.891 14:04:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:30.891 14:04:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:30.891 14:04:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:30.892 14:04:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.892 14:04:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.892 14:04:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.892 14:04:54 -- paths/export.sh@5 -- # export PATH 00:02:30.892 14:04:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.892 14:04:54 -- nvmf/common.sh@47 -- # : 0 00:02:30.892 14:04:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:30.892 14:04:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:30.892 14:04:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:30.892 14:04:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:30.892 14:04:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:30.892 14:04:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:30.892 14:04:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:30.892 14:04:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:30.892 14:04:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:30.892 14:04:54 -- spdk/autotest.sh@32 -- # uname -s 00:02:30.892 14:04:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:30.892 14:04:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:30.892 14:04:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:30.892 14:04:54 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:30.892 14:04:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:30.892 14:04:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:30.892 14:04:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:30.892 14:04:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:30.892 14:04:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:30.892 14:04:54 -- spdk/autotest.sh@48 -- # udevadm_pid=225109 00:02:30.892 14:04:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:30.892 14:04:54 -- pm/common@17 -- # local monitor 00:02:30.892 14:04:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.892 14:04:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.892 14:04:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.892 14:04:54 -- pm/common@21 -- # date +%s 00:02:30.892 14:04:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.892 14:04:54 -- pm/common@21 -- # date +%s 00:02:30.892 14:04:54 -- pm/common@25 -- # sleep 1 00:02:30.892 14:04:54 -- pm/common@21 -- # date +%s 00:02:30.892 14:04:54 -- pm/common@21 -- # date +%s 00:02:30.892 14:04:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717761894 00:02:30.892 14:04:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717761894 00:02:30.892 14:04:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717761894 00:02:30.892 14:04:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717761894 00:02:30.892 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717761894_collect-vmstat.pm.log 00:02:31.154 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717761894_collect-cpu-load.pm.log 00:02:31.154 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717761894_collect-cpu-temp.pm.log 00:02:31.154 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717761894_collect-bmc-pm.bmc.pm.log 00:02:32.098 14:04:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:32.098 14:04:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:32.098 14:04:55 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:32.098 14:04:55 -- common/autotest_common.sh@10 -- # set +x 00:02:32.098 14:04:55 -- spdk/autotest.sh@59 -- # create_test_list 00:02:32.098 14:04:55 -- common/autotest_common.sh@747 -- # xtrace_disable 00:02:32.098 14:04:55 -- common/autotest_common.sh@10 -- # set +x 00:02:32.098 14:04:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:32.098 14:04:55 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.098 14:04:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.098 14:04:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:32.098 14:04:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.098 14:04:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:32.098 14:04:55 -- common/autotest_common.sh@1454 -- # uname 00:02:32.098 14:04:55 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:02:32.098 14:04:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:32.098 14:04:55 -- common/autotest_common.sh@1474 -- # uname 00:02:32.098 14:04:55 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:02:32.098 14:04:55 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:32.098 14:04:55 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:32.098 14:04:55 -- spdk/autotest.sh@72 -- # hash lcov 00:02:32.098 14:04:55 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:32.098 14:04:55 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:32.098 --rc lcov_branch_coverage=1 00:02:32.098 --rc lcov_function_coverage=1 00:02:32.098 --rc genhtml_branch_coverage=1 00:02:32.098 --rc genhtml_function_coverage=1 00:02:32.098 --rc genhtml_legend=1 00:02:32.098 --rc geninfo_all_blocks=1 00:02:32.098 ' 00:02:32.098 14:04:55 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:32.098 --rc lcov_branch_coverage=1 00:02:32.098 --rc lcov_function_coverage=1 00:02:32.098 --rc genhtml_branch_coverage=1 00:02:32.098 --rc genhtml_function_coverage=1 00:02:32.098 --rc genhtml_legend=1 00:02:32.098 --rc geninfo_all_blocks=1 00:02:32.098 ' 00:02:32.098 14:04:55 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:32.098 --rc lcov_branch_coverage=1 00:02:32.098 --rc lcov_function_coverage=1 00:02:32.098 --rc genhtml_branch_coverage=1 00:02:32.098 --rc genhtml_function_coverage=1 00:02:32.098 --rc genhtml_legend=1 00:02:32.098 --rc geninfo_all_blocks=1 00:02:32.098 --no-external' 00:02:32.098 14:04:55 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:32.098 --rc lcov_branch_coverage=1 00:02:32.098 --rc lcov_function_coverage=1 00:02:32.098 --rc genhtml_branch_coverage=1 00:02:32.098 --rc genhtml_function_coverage=1 00:02:32.098 --rc genhtml_legend=1 00:02:32.098 --rc geninfo_all_blocks=1 00:02:32.098 --no-external' 00:02:32.098 14:04:55 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:32.098 lcov: LCOV version 1.14 00:02:32.098 14:04:55 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:44.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:44.330 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 
00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:59.244 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:59.244 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:59.244 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:59.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:59.245 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:59.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:59.245 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:59.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:59.245 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:59.505 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:59.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:59.505 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:59.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions 
found 00:02:59.506 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:59.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:59.506 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:59.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:59.506 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:59.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:59.766 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:59.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:59.766 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:01.681 14:05:24 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:01.681 14:05:24 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:01.681 14:05:24 -- common/autotest_common.sh@10 -- # set +x 00:03:01.681 14:05:24 -- spdk/autotest.sh@91 -- # rm -f 00:03:01.681 14:05:24 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.887 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:05.887 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:05.887 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:05.887 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:05.887 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:05.887 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:05.887 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:05.887 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:05.888 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:05.888 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:05.888 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:05.888 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:05.888 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:05.888 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:05.888 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:05.888 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:05.888 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:05.888 14:05:29 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:05.888 14:05:29 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:05.888 14:05:29 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:05.888 14:05:29 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:05.888 14:05:29 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:05.888 14:05:29 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:05.888 14:05:29 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:05.888 14:05:29 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:05.888 14:05:29 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:05.888 14:05:29 -- 
spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:05.888 14:05:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:05.888 14:05:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:05.888 14:05:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:05.888 14:05:29 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:05.888 14:05:29 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:05.888 No valid GPT data, bailing 00:03:05.888 14:05:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:05.888 14:05:29 -- scripts/common.sh@391 -- # pt= 00:03:05.888 14:05:29 -- scripts/common.sh@392 -- # return 1 00:03:05.888 14:05:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:05.888 1+0 records in 00:03:05.888 1+0 records out 00:03:05.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463487 s, 226 MB/s 00:03:05.888 14:05:29 -- spdk/autotest.sh@118 -- # sync 00:03:05.888 14:05:29 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:05.888 14:05:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:05.888 14:05:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:14.062 14:05:37 -- spdk/autotest.sh@124 -- # uname -s 00:03:14.062 14:05:37 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:14.062 14:05:37 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:14.062 14:05:37 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:14.062 14:05:37 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:14.062 14:05:37 -- common/autotest_common.sh@10 -- # set +x 00:03:14.062 ************************************ 00:03:14.062 START TEST setup.sh 00:03:14.062 ************************************ 00:03:14.062 14:05:37 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:14.062 * Looking for test storage... 00:03:14.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:14.062 14:05:37 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:14.062 14:05:37 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:14.062 14:05:37 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:14.062 14:05:37 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:14.062 14:05:37 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:14.062 14:05:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:14.062 ************************************ 00:03:14.062 START TEST acl 00:03:14.062 ************************************ 00:03:14.062 14:05:37 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:14.062 * Looking for test storage... 
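The wipe the trace above performs before the setup tests is small enough to sanity-check by hand: 1048576 bytes in 0.00463487 s is roughly 226 MB/s, matching the dd summary. A minimal stand-alone sketch of the same per-namespace logic follows; it is not the autotest script itself, the device name is only an example, and it assumes the sysfs zoned attribute and blkid behave as they do in the trace (a namespace reporting "none" is treated as non-zoned, an empty PTTYPE means no partition table was found, and only then is the first MiB zeroed).

dev=nvme0n1                                      # example namespace, adjust as needed
zoned=$(cat "/sys/block/$dev/queue/zoned" 2>/dev/null || echo none)
if [[ "$zoned" != "none" ]]; then
    echo "skipping zoned device $dev"            # zoned namespaces are excluded, as above
else
    pt=$(blkid -s PTTYPE -o value "/dev/$dev" || true)
    if [[ -z "$pt" ]]; then
        # no partition table detected ("No valid GPT data, bailing" in the trace)
        dd if=/dev/zero of="/dev/$dev" bs=1M count=1
    fi
fi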
00:03:14.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:14.062 14:05:37 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:14.062 14:05:37 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:14.062 14:05:37 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:14.062 14:05:37 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:14.062 14:05:37 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:14.062 14:05:37 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:14.062 14:05:37 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:14.062 14:05:37 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.062 14:05:37 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:14.062 14:05:37 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:14.062 14:05:37 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:14.062 14:05:37 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:14.062 14:05:37 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:14.062 14:05:37 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:14.062 14:05:37 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:14.062 14:05:37 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.267 14:05:41 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:18.267 14:05:41 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:18.267 14:05:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.267 14:05:41 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:18.267 14:05:41 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.267 14:05:41 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:22.470 Hugepages 00:03:22.470 node hugesize free / total 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 00:03:22.470 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:22.470 14:05:45 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:22.470 14:05:45 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:22.470 14:05:45 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:22.470 14:05:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:22.470 ************************************ 00:03:22.470 START TEST denied 00:03:22.470 ************************************ 00:03:22.470 14:05:45 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:03:22.470 14:05:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:22.470 14:05:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:22.470 14:05:45 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:22.470 14:05:45 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.470 14:05:45 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:26.674 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:26.674 14:05:49 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:26.674 14:05:49 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:26.674 14:05:49 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:26.674 14:05:49 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:26.674 14:05:49 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:26.674 14:05:49 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:26.674 14:05:49 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:26.674 14:05:49 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:26.674 14:05:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.674 14:05:49 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.886 00:03:30.886 real 0m8.910s 00:03:30.886 user 0m3.019s 00:03:30.886 sys 0m5.243s 00:03:30.886 14:05:54 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:30.886 14:05:54 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:30.886 ************************************ 00:03:30.886 END TEST denied 00:03:30.886 ************************************ 00:03:31.150 14:05:54 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:31.150 14:05:54 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:31.150 14:05:54 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:31.150 14:05:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:31.150 ************************************ 00:03:31.150 START TEST allowed 00:03:31.150 ************************************ 00:03:31.150 14:05:54 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:03:31.150 14:05:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:31.150 14:05:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:31.150 14:05:54 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.150 14:05:54 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:31.150 14:05:54 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:37.734 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:37.734 14:06:00 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:37.734 14:06:00 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:37.734 14:06:00 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:37.734 14:06:00 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.734 14:06:00 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.115 00:03:41.115 real 0m9.814s 00:03:41.115 user 0m2.949s 00:03:41.115 sys 0m5.183s 00:03:41.115 14:06:04 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:41.115 14:06:04 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:41.115 ************************************ 00:03:41.115 END TEST allowed 00:03:41.115 ************************************ 00:03:41.115 00:03:41.115 real 0m27.180s 00:03:41.115 user 0m9.067s 00:03:41.115 sys 0m15.998s 00:03:41.115 14:06:04 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:41.115 14:06:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:41.115 ************************************ 00:03:41.115 END TEST acl 00:03:41.115 ************************************ 00:03:41.116 14:06:04 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:41.116 14:06:04 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:41.116 14:06:04 setup.sh -- 
common/autotest_common.sh@1106 -- # xtrace_disable 00:03:41.116 14:06:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:41.116 ************************************ 00:03:41.116 START TEST hugepages 00:03:41.116 ************************************ 00:03:41.116 14:06:04 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:41.116 * Looking for test storage... 00:03:41.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 105684976 kB' 'MemAvailable: 109931948 kB' 'Buffers: 2704 kB' 'Cached: 11936096 kB' 'SwapCached: 0 kB' 'Active: 7969568 kB' 'Inactive: 4478944 kB' 'Active(anon): 7574208 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513148 kB' 'Mapped: 202672 kB' 'Shmem: 7064496 kB' 'KReclaimable: 394000 kB' 'Slab: 1111144 kB' 'SReclaimable: 394000 kB' 'SUnreclaim: 717144 kB' 'KernelStack: 27360 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460856 kB' 'Committed_AS: 8980008 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236704 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.116 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:41.117 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:41.118 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:41.118 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.118 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:41.118 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.118 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:41.118 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:41.118 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.118 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:41.118 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:41.118 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:41.118 14:06:04 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:41.118 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:41.118 14:06:04 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:41.118 14:06:04 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:41.118 14:06:04 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:41.118 14:06:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:41.118 ************************************ 00:03:41.118 START TEST default_setup 00:03:41.118 ************************************ 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.118 14:06:04 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:45.337 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:45.337 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:45.337 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:45.337 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:45.337 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:45.337 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:45.337 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 
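Before the setup script output that follows, the trace has already fixed the hugepage budget for this test: a 2097152 kB (2 GiB) request divided by the 2048 kB page size read from /proc/meminfo gives 1024 pages, all assigned to node 0 after every node's counters are cleared. A rough stand-alone sketch of that arithmetic, using the standard sysfs paths rather than the hugepages.sh helpers (variable names here are illustrative, and the writes need root):

size_kb=2097152                                                       # requested: 2 GiB expressed in kB
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 on this machine
nr_hugepages=$(( size_kb / hugepagesize_kb ))                         # 2097152 / 2048 = 1024
hp=/sys/devices/system/node/node0/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages
echo 0 > "$hp"                                                        # clear the node first, as clear_hp does
echo "$nr_hugepages" > "$hp"                                          # then reserve 1024 pages on node 0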
00:03:45.337 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:45.337 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:45.337 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:45.337 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:45.337 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:45.337 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:45.337 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:45.337 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:45.337 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:45.337 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.337 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107908916 kB' 'MemAvailable: 112156376 kB' 'Buffers: 2704 kB' 'Cached: 11936216 kB' 'SwapCached: 0 kB' 'Active: 7980516 kB' 'Inactive: 4478944 kB' 'Active(anon): 7585156 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523604 kB' 'Mapped: 202128 kB' 'Shmem: 7064616 kB' 'KReclaimable: 393968 kB' 'Slab: 1108848 kB' 'SReclaimable: 393968 kB' 'SUnreclaim: 714880 kB' 'KernelStack: 27488 kB' 'PageTables: 9272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8994308 kB' 'VmallocTotal: 13743895347199 kB' 
'VmallocUsed: 236732 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.338 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.339 14:06:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107912040 kB' 'MemAvailable: 112158996 kB' 'Buffers: 2704 kB' 'Cached: 11936220 kB' 'SwapCached: 0 kB' 'Active: 7980788 kB' 'Inactive: 4478944 kB' 'Active(anon): 7585428 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523932 kB' 'Mapped: 202188 kB' 'Shmem: 7064620 kB' 'KReclaimable: 393968 kB' 'Slab: 1108844 kB' 'SReclaimable: 393968 kB' 'SUnreclaim: 714876 kB' 'KernelStack: 27488 kB' 'PageTables: 8924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8991552 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236668 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.339 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.340 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107911756 kB' 'MemAvailable: 112158712 kB' 'Buffers: 2704 kB' 'Cached: 11936252 kB' 'SwapCached: 0 kB' 'Active: 7979832 kB' 'Inactive: 4478944 kB' 'Active(anon): 7584472 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523548 kB' 'Mapped: 202100 kB' 'Shmem: 7064652 kB' 'KReclaimable: 393968 kB' 'Slab: 1108800 kB' 'SReclaimable: 393968 kB' 'SUnreclaim: 714832 kB' 'KernelStack: 27520 kB' 'PageTables: 9168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 9011944 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236732 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 
14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.341 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.342 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:45.343 nr_hugepages=1024 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.343 resv_hugepages=0 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.343 surplus_hugepages=0 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.343 anon_hugepages=0 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107911844 kB' 'MemAvailable: 112158800 kB' 'Buffers: 2704 kB' 'Cached: 11936276 kB' 'SwapCached: 0 kB' 'Active: 7980160 
kB' 'Inactive: 4478944 kB' 'Active(anon): 7584800 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523800 kB' 'Mapped: 202100 kB' 'Shmem: 7064676 kB' 'KReclaimable: 393968 kB' 'Slab: 1108796 kB' 'SReclaimable: 393968 kB' 'SUnreclaim: 714828 kB' 'KernelStack: 27424 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8991968 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236732 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.343 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 
14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.344 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:45.345 
14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59000240 kB' 'MemUsed: 6658768 kB' 'SwapCached: 0 kB' 'Active: 2350064 kB' 'Inactive: 141672 kB' 'Active(anon): 2106772 kB' 'Inactive(anon): 0 kB' 'Active(file): 243292 kB' 'Inactive(file): 141672 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2306044 kB' 'Mapped: 87240 kB' 'AnonPages: 189080 kB' 'Shmem: 1921080 kB' 'KernelStack: 13592 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157840 kB' 'Slab: 478832 kB' 'SReclaimable: 157840 kB' 'SUnreclaim: 320992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:45.345 14:06:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.345 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:45.346 node0=1024 expecting 1024 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:45.346 00:03:45.346 real 0m4.199s 00:03:45.346 user 0m1.686s 00:03:45.346 sys 0m2.496s 00:03:45.346 14:06:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:45.347 14:06:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:45.347 ************************************ 00:03:45.347 END TEST default_setup 00:03:45.347 ************************************ 00:03:45.347 14:06:08 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:45.347 14:06:08 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:45.347 14:06:08 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:45.347 14:06:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.347 ************************************ 00:03:45.347 START TEST per_node_1G_alloc 00:03:45.347 ************************************ 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
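[Editor's note] For readers following the trace: default_setup passes because the HugePages_Total, HugePages_Free, HugePages_Rsvd and HugePages_Surp counters read from /proc/meminfo and /sys/devices/system/node/node0/meminfo add up to the 1024 pages that were requested (hence "node0=1024 expecting 1024"). Below is a minimal, stand-alone sketch of that accounting check; get_hp is an illustrative helper and is not the actual get_meminfo function from setup/common.sh.

    #!/usr/bin/env bash
    # Read a hugepage counter, system-wide or for one NUMA node.
    get_hp() {                       # usage: get_hp HugePages_Total [node]
        local key=$1 node=${2:-} f=/proc/meminfo
        [[ -n $node ]] && f=/sys/devices/system/node/node${node}/meminfo
        awk -v k="$key" '$0 ~ k":" {print $NF; exit}' "$f"
    }

    total=$(get_hp HugePages_Total)
    surp=$(get_hp HugePages_Surp)
    rsvd=$(get_hp HugePages_Rsvd)

    # The test passes when the pool matches what was requested (1024 here).
    if (( total == 1024 + surp + rsvd )); then
        echo "hugepage pool OK: ${total} pages"
    else
        echo "unexpected pool: total=${total} surp=${surp} rsvd=${rsvd}" >&2
    fi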
00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:45.347 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:45.609 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:45.609 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:45.609 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.609 14:06:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:49.825 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:49.825 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107915428 kB' 'MemAvailable: 112162308 kB' 'Buffers: 2704 kB' 'Cached: 11936396 kB' 'SwapCached: 0 kB' 'Active: 7974972 kB' 'Inactive: 4478944 kB' 'Active(anon): 7579612 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518032 kB' 'Mapped: 201072 kB' 'Shmem: 7064796 kB' 'KReclaimable: 393816 kB' 'Slab: 1107896 kB' 'SReclaimable: 393816 kB' 'SUnreclaim: 714080 kB' 'KernelStack: 27216 kB' 'PageTables: 8088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8978144 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236700 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 
117440512 kB' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.825 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.826 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107915748 kB' 'MemAvailable: 112162628 kB' 'Buffers: 2704 kB' 'Cached: 11936400 kB' 'SwapCached: 0 kB' 'Active: 7975344 kB' 'Inactive: 4478944 kB' 'Active(anon): 7579984 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518464 kB' 'Mapped: 201072 kB' 'Shmem: 7064800 kB' 'KReclaimable: 393816 kB' 'Slab: 1107896 kB' 'SReclaimable: 393816 kB' 'SUnreclaim: 714080 kB' 'KernelStack: 27168 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8978160 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236780 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.827 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:49.828 14:06:12 
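The trace above is setup/common.sh's get_meminfo walking every /proc/meminfo key until it reaches the requested one (here HugePages_Surp) and echoing its value, which hugepages.sh then stores (anon=0, surp=0). A minimal sketch of that lookup follows, assuming the whole-system /proc/meminfo path exercised in this trace (node= is empty); the function name and the not-found fallback are illustrative, not the verbatim helper, and the per-node variant that reads /sys/devices/system/node/node<N>/meminfo and strips the "Node <N> " prefix is omitted.

    #!/usr/bin/env bash
    # get_meminfo_sketch KEY - split each /proc/meminfo line on ': ' and print
    # the value of the first field whose name matches KEY, mirroring the loop
    # traced above (IFS=': '; read -r var val _; compare; echo; return).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"        # numeric part only; the "kB" unit lands in $_
                return 0
            fi
        done </proc/meminfo
        echo 0                     # key absent: fallback chosen for this sketch
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on the box traced above
    get_meminfo_sketch Hugepagesize     # prints 2048 (value reported in kB)

On the meminfo snapshot printed earlier in this trace, both calls would return the values shown there (HugePages_Surp: 0, Hugepagesize: 2048 kB).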
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.828 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107917088 kB' 'MemAvailable: 112163968 kB' 'Buffers: 2704 kB' 'Cached: 11936420 kB' 'SwapCached: 0 kB' 'Active: 7974952 kB' 'Inactive: 4478944 kB' 'Active(anon): 7579592 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517972 kB' 'Mapped: 201052 kB' 'Shmem: 7064820 kB' 'KReclaimable: 393816 kB' 'Slab: 1107944 kB' 'SReclaimable: 393816 kB' 'SUnreclaim: 714128 kB' 'KernelStack: 27040 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8975320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236620 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.829 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.830 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:49.831 nr_hugepages=1024 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.831 resv_hugepages=0 00:03:49.831 14:06:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.831 surplus_hugepages=0 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.831 anon_hugepages=0 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107917164 kB' 'MemAvailable: 112164024 kB' 'Buffers: 2704 kB' 'Cached: 11936440 kB' 'SwapCached: 0 kB' 'Active: 7974476 kB' 'Inactive: 4478944 kB' 'Active(anon): 7579116 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517520 kB' 'Mapped: 201032 kB' 'Shmem: 7064840 kB' 'KReclaimable: 393776 kB' 'Slab: 1107936 kB' 'SReclaimable: 393776 kB' 'SUnreclaim: 714160 kB' 'KernelStack: 27152 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8975340 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236620 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.831 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:49.832 14:06:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.832 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:49.833 14:06:13 
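For reference, the get_meminfo helper from setup/common.sh that the xtrace above keeps stepping through boils down to the following minimal sketch. It is a simplified reconstruction from the trace, not the verbatim script: the real helper uses mapfile plus an extglob substitution where this version strips the per-node "Node N " prefix line by line, and its fallback behaviour for a missing key may differ.

get_meminfo() {                               # get_meminfo KEY [NODE]
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, read the per-NUMA-node view instead of the global one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node [0-9]* }             # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                       # e.g. 1024 for HugePages_Total in this run
            return 0
        fi
    done < "$mem_f"
    return 1                                  # key not found
}

Called as "get_meminfo HugePages_Total" or "get_meminfo HugePages_Surp 0", which is exactly the pattern the per-node checks below rely on.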
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60072148 kB' 'MemUsed: 5586860 kB' 'SwapCached: 0 kB' 'Active: 2347144 kB' 'Inactive: 141672 kB' 'Active(anon): 2103852 kB' 'Inactive(anon): 0 kB' 'Active(file): 243292 kB' 'Inactive(file): 141672 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2306096 kB' 'Mapped: 86976 kB' 'AnonPages: 185888 kB' 'Shmem: 1921132 kB' 'KernelStack: 13208 kB' 'PageTables: 3284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157840 kB' 'Slab: 478644 kB' 'SReclaimable: 157840 kB' 'SUnreclaim: 320804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.833 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679804 kB' 'MemFree: 47845244 kB' 'MemUsed: 12834560 kB' 'SwapCached: 0 kB' 'Active: 5627448 kB' 'Inactive: 4337272 kB' 'Active(anon): 5475380 kB' 'Inactive(anon): 0 kB' 'Active(file): 152068 kB' 'Inactive(file): 4337272 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9633092 kB' 'Mapped: 114056 kB' 'AnonPages: 331696 kB' 'Shmem: 5143752 kB' 'KernelStack: 13944 kB' 'PageTables: 4764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 235936 kB' 'Slab: 629292 kB' 'SReclaimable: 235936 kB' 'SUnreclaim: 393356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.834 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.835 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.836 14:06:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:49.836 node0=512 expecting 512 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:49.836 node1=512 expecting 512 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:49.836 00:03:49.836 real 0m4.093s 00:03:49.836 user 0m1.600s 00:03:49.836 sys 0m2.558s 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:49.836 14:06:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:49.836 ************************************ 00:03:49.836 END TEST per_node_1G_alloc 00:03:49.836 ************************************ 00:03:49.836 14:06:13 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:49.836 14:06:13 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:49.836 14:06:13 
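The block that just finished is the per-node half of verify_nr_hugepages: for each NUMA node the script adds the reserved and surplus counts to the expected figure and compares it with what the node actually reports, which is where the "node0=512 expecting 512" and "node1=512 expecting 512" lines come from. A condensed, hedged sketch of that pass, reusing the get_meminfo sketch above and reading the actual per-node count straight from the node's meminfo (the real script collects it via its get_nodes helper):

nodes_test=(512 512)                          # expected pages per node in this run
resv=$(get_meminfo HugePages_Rsvd)            # 0 above

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    surp=$(get_meminfo HugePages_Surp "$node")             # 0 for both nodes above
    (( nodes_test[node] += surp ))
    actual=$(get_meminfo HugePages_Total "$node")
    echo "node$node=$actual expecting ${nodes_test[node]}"
    [[ $actual == "${nodes_test[node]}" ]] || { echo "hugepage count mismatch on node$node"; exit 1; }
done

With 1024 global pages and zero reserved or surplus pages, both nodes land on 512, so the test passes and exits after roughly four seconds of wall time as reported above.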
setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:49.836 14:06:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:49.836 ************************************ 00:03:49.836 START TEST even_2G_alloc 00:03:49.836 ************************************ 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.836 14:06:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:54.053 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:54.053 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:54.053 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:03:54.053 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:54.053 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:54.053 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:54.053 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:54.053 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:54.053 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:54.053 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:54.053 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:54.053 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:54.053 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:54.053 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:54.053 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:54.053 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:54.053 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107928496 kB' 'MemAvailable: 112175356 kB' 'Buffers: 2704 kB' 'Cached: 11936576 kB' 'SwapCached: 0 kB' 'Active: 7976188 kB' 'Inactive: 4478944 kB' 'Active(anon): 7580828 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518708 kB' 'Mapped: 201156 kB' 'Shmem: 7064976 kB' 'KReclaimable: 393776 kB' 'Slab: 1107656 kB' 'SReclaimable: 393776 kB' 'SUnreclaim: 713880 kB' 'KernelStack: 27152 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8976028 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236684 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.053 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
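Every run of "[[ <field> == ... ]] / continue" lines above is one pass of the get_meminfo helper from setup/common.sh scanning /proc/meminfo field by field until it reaches the requested key (here AnonHugePages, which comes back as 0). A simplified, runnable re-creation of that helper, assuming only the behaviour visible in the trace (global or per-node meminfo file, "Node N " prefix stripping, IFS=': ' field splitting); the real implementation lives in SPDK's test/setup/common.sh:

```bash
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) prefix-stripping pattern below

# Simplified sketch of the get_meminfo helper whose xtrace fills the log above.
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem

    # With a node argument, read that node's meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it so field
    # names line up with the global /proc/meminfo layout.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan "Key: value [kB]" lines until the requested key is found.
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo AnonHugePages     # prints 0 on the machine in this log
get_meminfo HugePages_Total   # prints 1024 with the even 2G allocation in place
```

The hundreds of "continue" lines in the trace are just this loop skipping non-matching fields; only the final "echo 0 / return 0" pair carries the answer back to hugepages.sh.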
00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107928928 kB' 'MemAvailable: 112175788 kB' 'Buffers: 2704 kB' 'Cached: 11936580 kB' 'SwapCached: 0 kB' 'Active: 7975816 kB' 'Inactive: 4478944 kB' 'Active(anon): 7580456 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518396 kB' 'Mapped: 201128 kB' 'Shmem: 7064980 kB' 'KReclaimable: 393776 kB' 'Slab: 1107624 kB' 'SReclaimable: 393776 kB' 'SUnreclaim: 713848 kB' 'KernelStack: 27168 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8976044 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236668 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.056 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107931952 kB' 'MemAvailable: 112178812 kB' 'Buffers: 2704 kB' 'Cached: 11936580 kB' 'SwapCached: 0 kB' 'Active: 7975392 kB' 'Inactive: 4478944 kB' 'Active(anon): 7580032 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518400 kB' 'Mapped: 201052 kB' 'Shmem: 7064980 kB' 'KReclaimable: 393776 kB' 'Slab: 1107612 kB' 'SReclaimable: 393776 kB' 'SUnreclaim: 713836 kB' 'KernelStack: 27184 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8976064 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236668 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:54.059 nr_hugepages=1024 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.059 resv_hugepages=0 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.059 surplus_hugepages=0 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.059 anon_hugepages=0 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
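The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" entries above are the xtrace of setup/common.sh walking /proc/meminfo (or a per-node meminfo file) field by field until it reaches the requested counter: each entry is split on ': ' into a key and a value, non-matching keys are skipped, and the matching value is echoed back to setup/hugepages.sh (here HugePages_Rsvd resolved to 0, so resv=0). The snippet below is a minimal sketch of that lookup under stated assumptions: the function name, the regex used to strip the "Node <n> " prefix, and the error handling are illustrative and are not the repository's get_meminfo implementation.

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above (assumption: mirrors the behaviour
# of get_meminfo in setup/common.sh; it is not the repository function itself).
get_meminfo_sketch() {
    local get=$1          # counter to look up, e.g. HugePages_Total or HugePages_Surp
    local node=${2:-}     # optional NUMA node index for per-node counters
    local mem_f=/proc/meminfo

    # Per-node lookups read the node-local meminfo file instead of the global one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local line var val _
    while IFS= read -r line; do
        # Per-node meminfo prefixes every entry with "Node <n> "; drop that prefix.
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        # Split "Key:   value kB" on ': ' into key and value, as the trace does.
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # numeric value only; the trailing "kB" unit falls into $_
            return 0
        fi
    done < "$mem_f"
    return 1
}

# On the host in this log:
#   get_meminfo_sketch HugePages_Total     -> 1024
#   get_meminfo_sketch HugePages_Surp 0    -> 0   (each of the two nodes holds 512 pages)

The even_2G_alloc test then appears to assert that the 1024 allocated 2048 kB pages, less surplus and reserved pages (both 0 here), match the requested count, and that they are split evenly, 512 per node, across the two NUMA nodes, as the per-node HugePages_Surp lookups in the trace that follows confirm.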
00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107933492 kB' 'MemAvailable: 112180352 kB' 'Buffers: 2704 kB' 'Cached: 11936620 kB' 'SwapCached: 0 kB' 'Active: 7976132 kB' 'Inactive: 4478944 kB' 'Active(anon): 7580772 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518596 kB' 'Mapped: 201052 kB' 'Shmem: 7065020 kB' 'KReclaimable: 393776 kB' 'Slab: 1107596 kB' 'SReclaimable: 393776 kB' 'SUnreclaim: 713820 kB' 'KernelStack: 27184 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8975720 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236668 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 
14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 
14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60089732 kB' 'MemUsed: 5569276 kB' 'SwapCached: 0 kB' 'Active: 2348744 kB' 'Inactive: 141672 kB' 'Active(anon): 2105452 kB' 'Inactive(anon): 0 kB' 'Active(file): 243292 kB' 'Inactive(file): 141672 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2306200 kB' 'Mapped: 86976 kB' 'AnonPages: 186992 kB' 'Shmem: 1921236 kB' 'KernelStack: 13208 kB' 'PageTables: 
3264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157840 kB' 'Slab: 478392 kB' 'SReclaimable: 157840 kB' 'SUnreclaim: 320552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.062 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679804 kB' 'MemFree: 47844676 kB' 'MemUsed: 12835128 kB' 'SwapCached: 0 kB' 'Active: 5627036 kB' 'Inactive: 4337272 kB' 'Active(anon): 5474968 kB' 'Inactive(anon): 0 kB' 'Active(file): 152068 kB' 'Inactive(file): 4337272 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9633144 kB' 'Mapped: 114076 kB' 'AnonPages: 331220 kB' 'Shmem: 5143804 kB' 'KernelStack: 13928 kB' 
'PageTables: 4708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 235936 kB' 'Slab: 629200 kB' 'SReclaimable: 235936 kB' 'SUnreclaim: 393264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh get_meminfo xtrace repeats for every field of the per-node meminfo snapshot above: IFS=': ' / read -r var val _ / [[ <field> == HugePages_Surp ]] / continue, until the HugePages_* entries are reached below ...]
00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.064 14:06:17
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:54.064 node0=512 expecting 512 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:54.064 node1=512 expecting 512 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:54.064 00:03:54.064 real 0m4.031s 00:03:54.064 user 0m1.674s 00:03:54.064 sys 0m2.423s 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:54.064 14:06:17 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.064 ************************************ 00:03:54.064 END TEST even_2G_alloc 00:03:54.064 ************************************ 00:03:54.064 14:06:17 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:54.064 14:06:17 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:54.064 14:06:17 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:54.064 14:06:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.064 ************************************ 00:03:54.064 START TEST odd_alloc 00:03:54.064 
************************************ 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.064 14:06:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:57.369 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:57.369 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:57.369 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:57.369 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:57.369 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:57.369 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:57.369 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:57.369 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:57.369 
0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:57.369 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:57.369 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:57.369 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:57.369 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:57.369 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:57.369 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:57.369 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:57.369 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:57.636 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:57.636 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.637 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107948108 kB' 'MemAvailable: 112194964 kB' 'Buffers: 2704 kB' 'Cached: 11936772 kB' 'SwapCached: 0 kB' 'Active: 7978020 kB' 'Inactive: 4478944 kB' 'Active(anon): 7582660 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520784 kB' 'Mapped: 201132 kB' 'Shmem: 7065172 kB' 'KReclaimable: 393768 kB' 'Slab: 1107192 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 713424 kB' 'KernelStack: 27360 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508408 kB' 'Committed_AS: 8979992 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236844 kB' 
'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB'
[... setup/common.sh get_meminfo xtrace repeats for every field of the /proc/meminfo snapshot above: IFS=': ' / read -r var val _ / [[ <field> == AnonHugePages ]] / continue, until AnonHugePages is reached below ...]
00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:06:21
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 
14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107956012 kB' 'MemAvailable: 112202868 kB' 'Buffers: 2704 kB' 'Cached: 11936776 kB' 'SwapCached: 0 kB' 'Active: 7977828 kB' 'Inactive: 4478944 kB' 'Active(anon): 7582468 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520652 kB' 'Mapped: 201108 kB' 'Shmem: 7065176 kB' 'KReclaimable: 393768 kB' 'Slab: 1107252 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 713484 kB' 'KernelStack: 27328 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508408 kB' 'Committed_AS: 8980008 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236764 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.638 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.639 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.639 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.639 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:57.639 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh get_meminfo xtrace repeats for every field of the /proc/meminfo snapshot above: IFS=': ' / read -r var val _ / [[ <field> == HugePages_Surp ]] / continue, until HugePages_Surp is reached below ...]
00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- #
read -r var val _ 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107955140 kB' 'MemAvailable: 112201996 kB' 'Buffers: 2704 kB' 'Cached: 11936792 kB' 'SwapCached: 0 kB' 'Active: 7978756 kB' 'Inactive: 4478944 kB' 'Active(anon): 7583396 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521536 kB' 'Mapped: 201612 kB' 'Shmem: 7065192 kB' 'KReclaimable: 393768 kB' 'Slab: 1107220 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 713452 kB' 'KernelStack: 27328 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508408 kB' 'Committed_AS: 8982044 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236780 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.640 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 
14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
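The repeated "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" pairs above are the xtrace of get_meminfo scanning /proc/meminfo (or a per-node meminfo file) field by field until it reaches the requested key, then echoing that field's value. A minimal sketch of that lookup follows; it is a simplified stand-in inferred from the trace, not the actual setup/common.sh body.

#!/usr/bin/env bash
# Hedged sketch of a get_meminfo-style lookup: scan a meminfo file line by
# line, skip non-matching keys (the "continue" pattern traced above), and
# print the numeric value of the requested field.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _

    # Per-node statistics live in sysfs when a NUMA node index is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS= read -r line; do
        line=${line#Node +([0-9]) }        # drop the "Node N " prefix, if any
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue   # same skip pattern as in the trace
        echo "$val"                        # numeric value; a trailing "kB" lands in $_
        return 0
    done <"$mem_f"
    return 1
}

# Example (illustrative): get_meminfo_sketch HugePages_Rsvd   -> "0" on this box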
00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.641 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:57.642 nr_hugepages=1025 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.642 resv_hugepages=0 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.642 surplus_hugepages=0 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.642 anon_hugepages=0 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107954432 kB' 'MemAvailable: 112201288 kB' 'Buffers: 2704 kB' 'Cached: 11936792 kB' 'SwapCached: 0 kB' 'Active: 7980892 kB' 'Inactive: 4478944 kB' 'Active(anon): 7585532 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524176 kB' 'Mapped: 201612 kB' 'Shmem: 7065192 kB' 'KReclaimable: 393768 kB' 'Slab: 1107220 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 713452 kB' 'KernelStack: 27280 kB' 'PageTables: 8104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508408 kB' 'Committed_AS: 8983088 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236780 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.642 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.643 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60111164 kB' 'MemUsed: 5547844 kB' 'SwapCached: 0 kB' 'Active: 2347728 kB' 'Inactive: 141672 kB' 'Active(anon): 2104436 kB' 'Inactive(anon): 0 kB' 'Active(file): 243292 kB' 'Inactive(file): 141672 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2306336 kB' 'Mapped: 87128 kB' 'AnonPages: 186228 kB' 'Shmem: 1921372 kB' 'KernelStack: 13192 kB' 'PageTables: 3248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157840 kB' 'Slab: 478604 kB' 'SReclaimable: 157840 kB' 'SUnreclaim: 320764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': '
00:03:57.644 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@32 compares each remaining node0 meminfo field (Inactive(file) through HugePages_Free) against HugePages_Surp and continues past each one]
00:03:57.645 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.645 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.645 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:57.645 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:57.645 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:57.645 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:57.645 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
[xtrace condensed: setup/common.sh@17-29 set get=HugePages_Surp and node=1, pick mem_f=/sys/devices/system/node/node1/meminfo, mapfile it into mem[] and strip the leading "Node 1 " prefix from each entry]
00:03:57.645 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679804 kB' 'MemFree: 47843560 kB' 'MemUsed: 12836244 kB' 'SwapCached: 0 kB' 'Active: 5629964 kB' 'Inactive: 4337272 kB' 'Active(anon): 5477896 kB' 'Inactive(anon): 0 kB' 'Active(file): 152068 kB' 'Inactive(file): 4337272 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9633204 kB' 'Mapped: 114132 kB' 'AnonPages: 334188 kB' 'Shmem: 5143864 kB' 'KernelStack: 14120 kB' 'PageTables: 4908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 235928 kB' 'Slab: 628712 kB' 'SReclaimable: 235928 kB' 'SUnreclaim: 392784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 read each node1 meminfo field in turn and continue until HugePages_Surp is reached]
00:03:57.647 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.647 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.647 14:06:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:57.647 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
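The two get_meminfo HugePages_Surp reads above follow one pattern: use /proc/meminfo or, when a node number is given, the per-node /sys/devices/system/node/nodeN/meminfo file, strip the leading "Node N " prefix, then split each line on ': ' until the requested field turns up and echo its value. A minimal stand-alone sketch of that pattern (get_meminfo_field is an illustrative name, not the project's setup/common.sh helper):

  # Sketch only: simplified form of the meminfo lookup performed by the xtrace above.
  get_meminfo_field() {
      local get=$1 node=$2 line var val
      local mem_f=/proc/meminfo
      # per-node meminfo files exist on NUMA systems and prefix every line with "Node <N> "
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS= read -r line; do
          line=${line#Node $node }                  # drop the "Node N " prefix when present
          IFS=': ' read -r var val _ <<< "$line"    # e.g. var=HugePages_Surp val=0
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < "$mem_f"
      return 1
  }
  # e.g. get_meminfo_field HugePages_Surp 1  ->  0 on this host, per the node1 dump above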
00:03:57.647 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.647 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.647 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.647 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:57.647 node0=512 expecting 513
00:03:57.647 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.647 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.647 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.647 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:57.647 node1=513 expecting 512
00:03:57.647 14:06:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:57.647
00:03:57.647 real 0m4.003s
00:03:57.647 user 0m1.540s
00:03:57.647 sys 0m2.517s
00:03:57.647 14:06:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:57.647 14:06:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:57.647 ************************************
00:03:57.647 END TEST odd_alloc
00:03:57.647 ************************************
00:03:57.908 14:06:21 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:57.909 14:06:21 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:57.909 14:06:21 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:57.909 14:06:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:57.909 ************************************
00:03:57.909 START TEST custom_alloc
00:03:57.909 ************************************
00:03:57.909 14:06:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc
[xtrace condensed: setup/hugepages.sh@167-178 plan the custom allocation -- get_test_nr_hugepages 1048576 yields nr_hugepages=512 (default split of 256 pages per node over _no_nodes=2) and nodes_hp[0]=512; get_test_nr_hugepages 2097152 yields nr_hugepages=1024 and nodes_hp[1]=1024]
[xtrace condensed: setup/hugepages.sh@181-183 append "nodes_hp[$node]=${nodes_hp[node]}" to HUGENODE for each node and accumulate _nr_hugepages (512 + 1024 = 1536); setup/hugepages.sh@186 recomputes the per-node counts from nodes_hp, giving nodes_test[0]=512 and nodes_test[1]=1024]
00:03:57.909 14:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:57.909 14:06:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:57.909 14:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:57.909 14:06:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:02.122 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:02.122 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:02.122 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
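The nr_hugepages=1536 recorded just above is simply the sum of the two per-node plans built earlier in custom_alloc: each node's count is appended as nodes_hp[$node]=<count> and the entries are joined with IFS=, into the HUGENODE string handed to scripts/setup.sh. A stand-alone sketch of that joining step (build_hugenode is a made-up name, not the test's get_test_nr_hugepages_per_node machinery):

  # Sketch only: turn a per-node plan such as 512/1024 into a HUGENODE-style string.
  build_hugenode() {
      local -a nodes_hp=("$@")           # page counts indexed by NUMA node, e.g. 512 1024
      local -a parts=()
      local node total=0
      for node in "${!nodes_hp[@]}"; do
          parts+=("nodes_hp[$node]=${nodes_hp[node]}")
          (( total += nodes_hp[node] ))
      done
      local IFS=,                        # comma-join, matching the HUGENODE format in the log
      echo "HUGENODE='${parts[*]}'  # $total pages in total"
  }
  # build_hugenode 512 1024  ->  HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'  # 1536 pages in total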
00:04:02.122 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
[xtrace condensed: setup/hugepages.sh@89-94 declare node, sorted_t, sorted_s, surp, resv and anon]
00:04:02.122 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:02.122 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[xtrace condensed: setup/common.sh@17-29 set get=AnonHugePages with no node argument, keep mem_f=/proc/meminfo and mapfile it into mem[]]
00:04:02.122 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 106907816 kB' 'MemAvailable: 111154672 kB' 'Buffers: 2704 kB' 'Cached: 11936944 kB' 'SwapCached: 0 kB' 'Active: 7980224 kB' 'Inactive: 4478944 kB' 'Active(anon): 7584864 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522344 kB' 'Mapped: 201192 kB' 'Shmem: 7065344 kB' 'KReclaimable: 393768 kB' 'Slab: 1107540 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 713772 kB' 'KernelStack: 27216 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985144 kB' 'Committed_AS: 8978284 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236636 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB'
[xtrace condensed: setup/common.sh@31-32 read each /proc/meminfo field in turn and continue until AnonHugePages is reached]
00:04:02.123 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:02.123 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.123 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:02.123 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
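Before counting surplus pages, verify_nr_hugepages rules out transparent-hugepage noise: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above compares what looks like the /sys/kernel/mm/transparent_hugepage/enabled mode string, and the AnonHugePages lookup returns 0 kB, so anon=0. A quick manual equivalent of that check (standard Linux procfs/sysfs paths; the THP node may be absent on kernels built without THP, hence the fallback):

  # Sketch only: report the THP mode and the current AnonHugePages figure.
  thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)   # e.g. "always [madvise] never"
  anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)               # 0 on this host, per the dump above
  echo "THP mode: ${thp_mode:-unknown}, AnonHugePages: ${anon_kb} kB"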
00:04:02.123 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace condensed: setup/common.sh@17-29 set get=HugePages_Surp with no node argument, keep mem_f=/proc/meminfo, mapfile it into mem[] and strip any node prefix]
00:04:02.124 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 106908760 kB' 'MemAvailable: 111155616 kB' 'Buffers: 2704 kB' 'Cached: 11936944 kB' 'SwapCached: 0 kB' 'Active: 7978820 kB' 'Inactive: 4478944 kB' 'Active(anon): 7583460 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520924 kB' 'Mapped: 201168 kB' 'Shmem: 7065344 kB' 'KReclaimable: 393768 kB' 'Slab: 1107540 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 713772 kB' 'KernelStack: 27168 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985144 kB' 'Committed_AS: 8978300 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236636 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB'
[xtrace condensed: setup/common.sh@31-32 read each /proc/meminfo field in turn (MemTotal, MemFree, ... NFS_Unstable) and continue toward HugePages_Surp]
00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc
-- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.125 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 106909384 kB' 'MemAvailable: 111156240 kB' 'Buffers: 2704 kB' 'Cached: 11936964 kB' 'SwapCached: 0 kB' 'Active: 7978312 kB' 'Inactive: 4478944 kB' 'Active(anon): 7582952 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520880 kB' 'Mapped: 201088 kB' 'Shmem: 
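For readers following the trace: the get_meminfo helper being exercised above reads /proc/meminfo (or a per-node meminfo file when a node number is passed), strips any "Node N " prefix, then scans the snapshot field by field until it reaches the requested key and echoes its value. A minimal sketch of that logic, reconstructed only from the xtrace shown here; the loop structure and the fallback return are assumptions, not the project's actual setup/common.sh.

#!/usr/bin/env bash
# Sketch reconstructed from the xtrace above; treat as an approximation,
# not the real setup/common.sh.
shopt -s extglob   # required for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f=/proc/meminfo mem
    # Per-node statistics live in sysfs and prefix every line with "Node <n> ".
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node N " prefix, if any
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"                  # e.g. 0 for HugePages_Surp in the run above
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1                             # assumed fallback; not visible in the trace
}

get_meminfo HugePages_Surp      # global query, as at setup/hugepages.sh@99 above
get_meminfo HugePages_Surp 0    # node-scoped query, as at setup/hugepages.sh@117 below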
00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.126 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 106909384 kB' 'MemAvailable: 111156240 kB' 'Buffers: 2704 kB' 'Cached: 11936964 kB' 'SwapCached: 0 kB' 'Active: 7978312 kB' 'Inactive: 4478944 kB' 'Active(anon): 7582952 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520880 kB' 'Mapped: 201088 kB' 'Shmem: 7065364 kB' 'KReclaimable: 393768 kB' 'Slab: 1107536 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 713768 kB' 'KernelStack: 27168 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985144 kB' 'Committed_AS: 8978320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236636 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB'
[... per-field xtrace trimmed: setup/common.sh@32 checks each field of the snapshot above against HugePages_Rsvd and issues "continue" for every non-matching field until the match below ...]
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:02.128 nr_hugepages=1536
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:02.128 resv_hugepages=0
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:02.128 surplus_hugepages=0
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:02.128 anon_hugepages=0
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
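The arithmetic checks at setup/hugepages.sh@107-109 above are the accounting step: the 1536 pages the custom_alloc test requested must match what the kernel reports once surplus and reserved pages are counted. A hypothetical restatement of that check; the variable names (nr_hugepages, surp, resv) follow the trace, but the awk queries and the error branch are assumptions rather than the project's hugepages.sh.

#!/usr/bin/env bash
# Assumed restatement of the accounting traced above, not the project's code.
nr_hugepages=1536                                              # pages the custom_alloc test requested (512 + 1024)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)      # 0 in the run above
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)      # 0 in the run above
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)    # 1536 in the run above

# The kernel's pool has to cover the requested pages plus any surplus and
# reserved pages before the test moves on to the per-node breakdown.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: ${total} pages"
else
    echo "hugepage pool mismatch: total=${total} nr=${nr_hugepages} surp=${surp} resv=${resv}" >&2
    exit 1
fi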
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.128 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 106908628 kB' 'MemAvailable: 111155484 kB' 'Buffers: 2704 kB' 'Cached: 11937004 kB' 'SwapCached: 0 kB' 'Active: 7978008 kB' 'Inactive: 4478944 kB' 'Active(anon): 7582648 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520500 kB' 'Mapped: 201088 kB' 'Shmem: 7065404 kB' 'KReclaimable: 393768 kB' 'Slab: 1107536 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 713768 kB' 'KernelStack: 27152 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985144 kB' 'Committed_AS: 8978344 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236668 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB'
[... per-field xtrace trimmed: setup/common.sh@32 checks each field of the snapshot above against HugePages_Total and issues "continue" for every non-matching field until the match below ...]
IFS=': ' 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.129 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60110424 kB' 'MemUsed: 5548584 kB' 'SwapCached: 0 kB' 'Active: 2349628 kB' 'Inactive: 141672 kB' 'Active(anon): 2106336 kB' 'Inactive(anon): 0 kB' 'Active(file): 243292 kB' 'Inactive(file): 141672 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2306516 kB' 'Mapped: 86976 kB' 'AnonPages: 187972 kB' 'Shmem: 1921552 kB' 'KernelStack: 13224 kB' 'PageTables: 3356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157840 kB' 'Slab: 478812 kB' 'SReclaimable: 157840 kB' 'SUnreclaim: 320972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.130 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _
[... the same setup/common.sh@31/@32 read-and-continue pattern repeats for the remaining fields of the node0 snapshot (SwapCached through HugePages_Free), none of which matches HugePages_Surp ...]
00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
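For readers following the trace: setup/common.sh's get_meminfo (the @17-@33 lines above) picks /proc/meminfo for system-wide queries or /sys/devices/system/node/nodeN/meminfo when a node index is given, strips the "Node N" prefix, then walks the snapshot field by field until it reaches the requested key — here HugePages_Surp on node 0, which comes back as 0. A minimal stand-alone sketch of that kind of lookup, assuming the standard meminfo layout; the function name is illustrative and this is not SPDK's actual code:

```bash
#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup as seen in the setup/common.sh trace:
# scan /proc/meminfo (or a per-node meminfo file) line by line and print the
# value of one field. The function name is illustrative, not SPDK's code.

get_meminfo_value() {
    local key=$1 node=${2:-}              # e.g. HugePages_Surp, optional node index
    local file=/proc/meminfo

    # Per-node statistics live under /sys and prefix each line with "Node <n>".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi

    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}        # strip the per-node prefix if present
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then     # found the requested field
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1                              # field not present
}

# Example: surplus huge pages on NUMA node 0, as queried in the trace above
get_meminfo_value HugePages_Surp 0
```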
00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679804 kB' 'MemFree: 46798796 kB' 'MemUsed: 13881008 kB' 'SwapCached: 0 kB' 'Active: 5628392 kB' 'Inactive: 4337272 kB' 'Active(anon): 5476324 kB' 'Inactive(anon): 0 kB' 'Active(file): 152068 kB' 'Inactive(file): 4337272 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9633216 kB' 'Mapped: 114112 kB' 'AnonPages: 332528 kB' 'Shmem: 5143876 kB' 'KernelStack: 13928 kB' 'PageTables: 4720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 235928 kB' 'Slab: 628724 kB' 'SReclaimable: 235928 kB' 'SUnreclaim: 392796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.131 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.131 14:06:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same setup/common.sh@31/@32 read-and-continue pattern repeats for the remaining fields of the node1 snapshot (Active through HugePages_Free), none of which matches HugePages_Surp ...]
00:04:02.133 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.133 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.133 14:06:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:02.133 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:02.133 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:02.133 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:02.133 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
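The @115-@127 loops traced above and the "nodeN=... expecting ..." lines that follow boil down to a per-node comparison: nodes_sys holds the huge page counts the kernel reports for each NUMA node (512 and 1024 in the get_nodes trace earlier), while nodes_test holds what the test expects after adding the reserved and surplus pages it just read (both 0 here). A small sketch of that check, assuming per-node meminfo files are present; the array names and expected values are illustrative:

```bash
#!/usr/bin/env bash
# Sketch of the per-node verification performed above: compare the count each
# node is expected to hold (nodes_test) with what the kernel reports per node,
# printing "nodeN=<actual> expecting <expected>" like the log lines below.
shopt -s nullglob

declare -A nodes_test=([0]=512 [1]=1024)   # expected per-node huge pages
declare -A nodes_sys=()                    # actual per-node huge pages

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    nodes_sys[$node]=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
done

rc=0
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_sys[$node]:-0} expecting ${nodes_test[$node]}"
    (( ${nodes_sys[$node]:-0} == nodes_test[node] )) || rc=1
done
exit $rc
```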
00:04:02.133 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:02.133 node0=512 expecting 512
00:04:02.133 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:02.133 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:02.133 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:02.133 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:04:02.133 14:06:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:02.133
00:04:02.133 real	0m3.955s
00:04:02.133 user	0m1.555s
00:04:02.133 sys	0m2.469s
00:04:02.133 14:06:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:04:02.133 14:06:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:02.133 ************************************
00:04:02.133 END TEST custom_alloc
00:04:02.133 ************************************
00:04:02.133 14:06:25 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:02.133 14:06:25 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:04:02.133 14:06:25 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:04:02.133 14:06:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:02.133 ************************************
00:04:02.133 START TEST no_shrink_alloc
00:04:02.133 ************************************
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc --
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.133 14:06:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:05.441 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:05.441 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:05.441 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:05.441 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:05.441 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:05.441 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:05.441 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:05.441 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:05.441 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:05.441 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:05.441 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:05.441 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:05.441 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:05.441 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:05.441 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:05.441 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:05.441 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107952804 kB' 'MemAvailable: 112199660 kB' 'Buffers: 2704 kB' 'Cached: 11937120 kB' 'SwapCached: 0 kB' 'Active: 7980100 kB' 'Inactive: 4478944 kB' 'Active(anon): 7584740 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521984 kB' 'Mapped: 201208 kB' 'Shmem: 7065520 kB' 'KReclaimable: 393768 kB' 'Slab: 1107580 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 713812 kB' 'KernelStack: 27168 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8979096 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236636 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.707 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.708 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.708 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.708 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.708 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.708 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.708 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.708 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.708 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.708 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.708 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.708 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.708 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.708 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.708 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:05.708 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... the same setup/common.sh@31/@32 read-and-continue pattern repeats for the remaining fields of the snapshot above (SwapCached through VmallocChunk), none of which matches AnonHugePages ...]
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107953168 kB' 'MemAvailable: 112200024 kB' 'Buffers: 2704 kB' 'Cached: 11937124 kB' 'SwapCached: 0 kB' 'Active: 7979720 kB' 'Inactive: 4478944 kB' 'Active(anon): 7584360 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521660 kB' 'Mapped: 201200 kB' 'Shmem: 7065524 kB' 'KReclaimable: 393768 kB' 'Slab: 1107564 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 713796 kB' 'KernelStack: 27168 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8979116 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236620 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.709 14:06:29 
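The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key until the requested field matches. A minimal sketch of that lookup, assuming plain bash on Linux (the helper name below is illustrative, not the SPDK function itself):

#!/usr/bin/env bash
# Sketch: read /proc/meminfo line by line, split each "Key: value kB" entry on ': ',
# skip every key (continue) until the requested one matches, then print its value.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch AnonHugePages   # prints 0 on this runner, matching the snapshot above

The escaped pattern printed in the trace ([[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]) is just xtrace quoting a literal right-hand side; it behaves the same as the quoted comparison in the sketch.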
00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.709 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
(the same check-and-continue cycle repeats for every following /proc/meminfo key, MemFree through HugePages_Rsvd)
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107955512 kB' 'MemAvailable: 112202368 kB' 'Buffers: 2704 kB' 'Cached: 11937140 kB' 'SwapCached: 0 kB' 'Active: 7981112 kB' 'Inactive: 4478944 kB' 'Active(anon): 7585752 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523528 kB' 'Mapped: 201124 kB' 'Shmem: 7065540 kB' 'KReclaimable: 393768 kB' 'Slab: 1107544 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 713776 kB' 'KernelStack: 27200 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 9000392 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236604 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB'
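The mem_f / mapfile / "${mem[@]#Node +([0-9]) }" lines repeated before each lookup are the per-node plumbing of the same helper: with no node argument it reads /proc/meminfo, otherwise the per-node sysfs file, whose lines carry a "Node N " prefix that has to be stripped before the key match. A rough sketch of that variant, assuming bash with extglob available (function and variable names are illustrative, not the SPDK code):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern that strips the "Node N " prefix

node_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f mem line var val _
    mem_f=/proc/meminfo
    # with a node number, read the per-node counters instead of the global ones
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node 0 " on sysfs lines
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

node_meminfo_sketch HugePages_Free 0   # e.g. hugepages still free on NUMA node 0

In this run the node variable is empty, which is why the existence test in the trace shows the odd-looking path /sys/devices/system/node/node/meminfo and the helper keeps reading /proc/meminfo.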
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.711 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
(the same check-and-continue cycle repeats for every following /proc/meminfo key, MemFree through HugePages_Free)
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:05.713 nr_hugepages=1024
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:05.713 resv_hugepages=0
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:05.713 surplus_hugepages=0
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:05.713 anon_hugepages=0
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107956348 kB' 'MemAvailable: 112203204 kB' 'Buffers: 2704 kB' 'Cached: 11937164 kB' 'SwapCached: 0 kB' 'Active: 7979260 kB' 'Inactive: 4478944 kB' 'Active(anon): 7583900 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521616 kB' 'Mapped: 201124 kB' 'Shmem: 7065564 kB' 'KReclaimable: 393768 kB' 'Slab: 1107544 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 713776 kB' 'KernelStack: 27136 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8978792 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236572 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB'
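With anon, surp and resv all back as 0, hugepages.sh lines 102-109 print the pool state and then check that the hugepage pool seen in /proc/meminfo reconciles with the requested size, as in the trace's (( 1024 == nr_hugepages + surp + resv )). A self-contained sketch of that kind of consistency check, assuming a requested pool of 1024 2 MiB pages (the script below and its helper name are illustrative, not the SPDK test itself):

#!/usr/bin/env bash
# Sketch: re-read the hugepage counters from /proc/meminfo and verify the pool
# matches what was requested, with no surplus, reserved or anon hugepages skewing it.
set -u

meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

expected=${1:-1024}                    # requested nr_hugepages (1024 in this run)
total=$(meminfo HugePages_Total)
surp=$(meminfo HugePages_Surp)
resv=$(meminfo HugePages_Rsvd)
anon=$(meminfo AnonHugePages)

echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# same shape as the trace's "(( 1024 == nr_hugepages + surp + resv ))" check
if (( expected == total + surp + resv )); then
    echo "hugepage pool consistent"
else
    echo "hugepage accounting mismatch" >&2
    exit 1
fi

On this runner the numbers line up: HugePages_Total is 1024 while HugePages_Rsvd, HugePages_Surp and AnonHugePages are all 0, so the test moves on to the HugePages_Total lookup traced below.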
00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.713 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
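The long run of per-key [[ ... ]] / continue entries through this part of the trace is get_meminfo from setup/common.sh scanning /proc/meminfo one "Key: value" pair at a time (IFS=': ' with read -r var val) until it reaches HugePages_Total and echoes the value. A minimal stand-alone sketch of that same parsing pattern follows; the function name get_meminfo_field is illustrative, not part of the SPDK scripts.

#!/usr/bin/env bash
# Walk /proc/meminfo line by line and print the value of the requested field,
# mirroring the IFS=': ' read loop visible in the trace above.
get_meminfo_field() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
get_meminfo_field HugePages_Total   # prints 1024 on the node traced above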
00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.714 14:06:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.714 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59054240 kB' 'MemUsed: 6604768 kB' 'SwapCached: 0 kB' 'Active: 2348216 kB' 'Inactive: 141672 kB' 'Active(anon): 2104924 kB' 'Inactive(anon): 0 kB' 'Active(file): 243292 kB' 'Inactive(file): 141672 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2306660 kB' 'Mapped: 86976 kB' 'AnonPages: 186364 kB' 'Shmem: 1921696 kB' 'KernelStack: 13176 kB' 'PageTables: 3196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157840 kB' 'Slab: 478716 kB' 'SReclaimable: 157840 kB' 'SUnreclaim: 320876 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
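At this point get_nodes has recorded the per-node hugepage split (1024 pages on node0, 0 on node1, no_nodes=2), and get_meminfo is re-run with node=0, which switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the "Node 0" prefix from each line before running the same key scan. A hedged per-node variant of the lookup is sketched below, assuming a NUMA system that exposes /sys/devices/system/node/node*/meminfo; the helper name is illustrative.

# Per-node variant of the scan: per-node meminfo lines carry a "Node <n>"
# prefix, so strip it first, then match the requested key as before.
get_node_meminfo_field() {
    local node=$1 want=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9][0-9]* *//' "/sys/devices/system/node/node${node}/meminfo")
    return 1
}
get_node_meminfo_field 0 HugePages_Surp   # prints 0 in the run traced above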
00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.715 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.716 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.717 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.717 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.717 node0=1024 expecting 1024 00:04:05.717 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.717 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:05.717 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:05.717 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:05.717 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.717 14:06:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.932 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:09.932 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:09.932 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.932 14:06:33 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107936868 kB' 'MemAvailable: 112183724 kB' 'Buffers: 2704 kB' 'Cached: 11937292 kB' 'SwapCached: 0 kB' 'Active: 7981612 kB' 'Inactive: 4478944 kB' 'Active(anon): 7586252 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523484 kB' 'Mapped: 201244 kB' 'Shmem: 7065692 kB' 'KReclaimable: 393768 kB' 'Slab: 1107856 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 714088 kB' 'KernelStack: 27248 kB' 'PageTables: 8640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8984716 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236940 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.932 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.933 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.934 14:06:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107933144 kB' 'MemAvailable: 112180000 kB' 'Buffers: 2704 kB' 'Cached: 11937292 kB' 'SwapCached: 0 kB' 'Active: 7984732 kB' 'Inactive: 4478944 kB' 'Active(anon): 7589372 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526612 kB' 'Mapped: 201760 kB' 'Shmem: 7065692 kB' 'KReclaimable: 393768 kB' 'Slab: 1107856 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 714088 kB' 'KernelStack: 27312 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8987124 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236940 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.934 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.935 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
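The trace entries around this point show the common.sh helper scanning /proc/meminfo with IFS=': ' and "read -r var val _", skipping every field until the requested key (here HugePages_Surp, then HugePages_Rsvd) is reached and its value is echoed. A minimal stand-alone sketch of that lookup pattern is below; get_meminfo_field is a hypothetical name for illustration, not the project's actual get_meminfo, which additionally strips "Node N" prefixes so it can also read per-node meminfo files.

# Sketch only: simplified /proc/meminfo field lookup in the style of the trace above.
get_meminfo_field() {              # usage: get_meminfo_field HugePages_Rsvd
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        # Keep skipping lines until the field name matches the requested key.
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1                       # requested field not present
}

get_meminfo_field HugePages_Rsvd   # on the node in this log this prints 0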
00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107931324 kB' 'MemAvailable: 112178180 kB' 'Buffers: 2704 kB' 'Cached: 11937312 kB' 'SwapCached: 0 kB' 'Active: 7985896 kB' 'Inactive: 4478944 kB' 'Active(anon): 7590536 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528152 kB' 'Mapped: 201892 kB' 'Shmem: 7065712 kB' 'KReclaimable: 393768 kB' 'Slab: 1107896 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 714128 kB' 'KernelStack: 27248 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8988180 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236864 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.936 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.937 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.938 nr_hugepages=1024 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.938 resv_hugepages=0 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.938 surplus_hugepages=0 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:09.938 anon_hugepages=0 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 107937092 kB' 'MemAvailable: 112183948 kB' 'Buffers: 2704 kB' 'Cached: 11937332 kB' 'SwapCached: 0 kB' 'Active: 7980164 kB' 'Inactive: 4478944 kB' 'Active(anon): 7584804 kB' 'Inactive(anon): 0 kB' 'Active(file): 395360 kB' 'Inactive(file): 4478944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522868 kB' 'Mapped: 201172 kB' 'Shmem: 7065732 kB' 'KReclaimable: 393768 kB' 'Slab: 1107896 kB' 'SReclaimable: 393768 kB' 'SUnreclaim: 714128 kB' 'KernelStack: 27248 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 8983816 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236892 kB' 'VmallocChunk: 0 kB' 'Percpu: 100224 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2950516 kB' 'DirectMap2M: 15603712 kB' 'DirectMap1G: 117440512 kB' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.938 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.939 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65659008 kB' 'MemFree: 59046328 kB' 'MemUsed: 6612680 kB' 'SwapCached: 0 kB' 'Active: 2350960 kB' 'Inactive: 141672 kB' 'Active(anon): 2107668 kB' 'Inactive(anon): 0 kB' 'Active(file): 243292 kB' 'Inactive(file): 141672 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2306780 kB' 'Mapped: 86976 kB' 'AnonPages: 189084 kB' 'Shmem: 1921816 kB' 'KernelStack: 13256 kB' 'PageTables: 3492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 157840 kB' 'Slab: 479060 kB' 'SReclaimable: 157840 kB' 'SUnreclaim: 321220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 
14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.940 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:09.941 node0=1024 expecting 1024 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:09.941 00:04:09.941 real 0m7.928s 00:04:09.941 user 0m3.164s 00:04:09.941 sys 0m4.882s 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:09.941 14:06:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:09.941 ************************************ 00:04:09.942 END TEST no_shrink_alloc 00:04:09.942 ************************************ 00:04:09.942 14:06:33 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:09.942 14:06:33 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:09.942 14:06:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
00:04:09.942 14:06:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.942 14:06:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:09.942 14:06:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.942 14:06:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:09.942 14:06:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:09.942 14:06:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.942 14:06:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:09.942 14:06:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:09.942 14:06:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:09.942 14:06:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:09.942 14:06:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:09.942 00:04:09.942 real 0m28.845s 00:04:09.942 user 0m11.471s 00:04:09.942 sys 0m17.766s 00:04:09.942 14:06:33 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:09.942 14:06:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:09.942 ************************************ 00:04:09.942 END TEST hugepages 00:04:09.942 ************************************ 00:04:09.942 14:06:33 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:09.942 14:06:33 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:09.942 14:06:33 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:09.942 14:06:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:09.942 ************************************ 00:04:09.942 START TEST driver 00:04:09.942 ************************************ 00:04:09.942 14:06:33 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:09.942 * Looking for test storage... 
00:04:09.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:09.942 14:06:33 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:09.942 14:06:33 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.942 14:06:33 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.223 14:06:38 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:15.223 14:06:38 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:15.223 14:06:38 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:15.223 14:06:38 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:15.223 ************************************ 00:04:15.223 START TEST guess_driver 00:04:15.223 ************************************ 00:04:15.223 14:06:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:15.224 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:15.224 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:15.224 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:15.224 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:15.224 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:15.224 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:15.224 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:15.224 14:06:38 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:15.224 Looking for driver=vfio-pci 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.224 14:06:38 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.428 14:06:42 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.712 00:04:24.712 real 0m9.121s 00:04:24.712 user 0m3.002s 00:04:24.712 sys 0m5.368s 00:04:24.712 14:06:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:24.712 14:06:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:24.712 ************************************ 00:04:24.712 END TEST guess_driver 00:04:24.712 ************************************ 00:04:24.712 00:04:24.712 real 0m14.296s 00:04:24.712 user 0m4.554s 00:04:24.712 sys 0m8.245s 00:04:24.712 14:06:47 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:24.712 
14:06:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:24.712 ************************************ 00:04:24.712 END TEST driver 00:04:24.712 ************************************ 00:04:24.712 14:06:47 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:24.712 14:06:47 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:24.712 14:06:47 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:24.712 14:06:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.712 ************************************ 00:04:24.712 START TEST devices 00:04:24.712 ************************************ 00:04:24.712 14:06:47 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:24.712 * Looking for test storage... 00:04:24.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:24.712 14:06:47 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:24.712 14:06:47 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:24.712 14:06:47 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.712 14:06:47 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:28.967 14:06:52 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:28.967 14:06:52 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:28.967 14:06:52 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:28.967 14:06:52 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:28.967 14:06:52 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:28.967 14:06:52 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:28.967 14:06:52 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:28.967 14:06:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:28.967 14:06:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:28.967 14:06:52 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:28.967 No valid GPT data, 
bailing 00:04:28.967 14:06:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:28.967 14:06:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:28.967 14:06:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:28.967 14:06:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:28.967 14:06:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:28.967 14:06:52 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:28.967 14:06:52 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:28.968 14:06:52 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:28.968 14:06:52 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:28.968 14:06:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:28.968 ************************************ 00:04:28.968 START TEST nvme_mount 00:04:28.968 ************************************ 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:28.968 14:06:52 
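Before the nvme_mount test starts, the devices suite above selects a test disk: it skips zoned namespaces (queue/zoned), treats "No valid GPT data, bailing" from spdk-gpt.py as "disk is free", requires at least min_disk_size=3221225472 bytes, and records the disk's PCI address (0000:65:00.0). A simplified sketch of that selection, using blkid as a stand-in for the in-use check and an illustrative helper name:

min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3 GiB, as in the trace

is_zoned() {
    local dev=$1
    [[ -e /sys/block/$dev/queue/zoned && $(</sys/block/$dev/queue/zoned) != none ]]
}

pick_test_disk() {
    local block dev size
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        is_zoned "$dev" && continue
        # A recognizable partition table means the disk may be in use; skip it.
        blkid -s PTTYPE -o value "/dev/$dev" | grep -q . && continue
        size=$(( $(<"$block/size") * 512 ))   # size file is in 512-byte sectors
        (( size >= min_disk_size )) || continue
        echo "$dev"
        return 0
    done
    return 1
}

pick_test_disk   # prints e.g. nvme0n1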
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:28.968 14:06:52 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:29.539 Creating new GPT entries in memory. 00:04:29.539 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:29.539 other utilities. 00:04:29.539 14:06:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:29.539 14:06:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.539 14:06:53 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:29.539 14:06:53 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:29.539 14:06:53 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:30.922 Creating new GPT entries in memory. 00:04:30.922 The operation has completed successfully. 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 268554 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.922 14:06:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:34.292 14:06:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.553 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.553 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:34.553 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.553 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:34.553 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:34.553 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:34.553 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.553 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.553 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:34.553 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:34.553 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:34.553 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:34.553 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:34.814 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:34.814 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:34.814 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:34.814 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:34.814 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:34.814 14:06:58 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:34.814 14:06:58 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.814 14:06:58 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:34.814 14:06:58 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:34.814 14:06:58 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.814 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:34.814 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:34.814 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:34.814 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.814 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:34.815 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:34.815 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:34.815 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:34.815 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:34.815 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.815 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:34.815 14:06:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:34.815 14:06:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.815 14:06:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.019 14:07:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:42.322 14:07:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.582 14:07:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.582 14:07:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:42.582 14:07:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:42.582 14:07:06 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:42.583 14:07:06 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.583 14:07:06 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.583 14:07:06 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.583 14:07:06 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:42.583 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:42.583 00:04:42.583 real 0m13.869s 00:04:42.583 user 0m4.349s 00:04:42.583 sys 0m7.427s 00:04:42.583 14:07:06 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:42.583 14:07:06 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:42.583 ************************************ 00:04:42.583 END TEST nvme_mount 00:04:42.583 ************************************ 00:04:42.583 
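The nvme_mount trace above reduces to a short sequence: wipe the disk, carve out a single ~1 GiB data partition, put ext4 on it, mount it under test/setup/nvme_mount with a marker file, and then confirm that setup.sh reports the device as active (mounted) instead of rebinding it away from the kernel. A minimal sketch of that sequence, assuming a scratch NVMe disk at /dev/nvme0n1 that may be destroyed and an SPDK checkout at $SPDK_DIR (both placeholders, not the test's own helpers):

#!/usr/bin/env bash
# Sketch only: this destroys data on $disk. The allow-listed PCI address and
# partition bounds are taken from the trace above; $SPDK_DIR is a placeholder.
set -euo pipefail
disk=/dev/nvme0n1
mnt="$SPDK_DIR/test/setup/nvme_mount"

sgdisk "$disk" --zap-all                    # drop old GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199         # one ~1 GiB partition, same bounds as the trace
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                      # marker file the verify step expects to survive

# Verify: with only this controller allow-listed, setup.sh should report it as
# an active (mounted) device rather than binding it to a userspace driver.
PCI_ALLOWED=0000:65:00.0 "$SPDK_DIR/scripts/setup.sh" config | grep 'Active devices:'

# Cleanup mirrors the trace: unmount, then wipe filesystem and GPT signatures.
umount "$mnt"
wipefs --all "${disk}p1"
wipefs --all "$disk"

The dm_mount test that follows applies the same verify/cleanup pattern to a device-mapper target built from two partitions.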
14:07:06 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:42.583 14:07:06 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:42.583 14:07:06 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:42.583 14:07:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:42.583 ************************************ 00:04:42.583 START TEST dm_mount 00:04:42.583 ************************************ 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:42.583 14:07:06 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:43.643 Creating new GPT entries in memory. 00:04:43.643 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:43.643 other utilities. 00:04:43.643 14:07:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:43.643 14:07:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.643 14:07:07 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:43.643 14:07:07 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:43.643 14:07:07 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:44.591 Creating new GPT entries in memory. 00:04:44.591 The operation has completed successfully. 
00:04:44.591 14:07:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:44.591 14:07:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.591 14:07:08 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:44.591 14:07:08 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:44.591 14:07:08 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:45.536 The operation has completed successfully. 00:04:45.536 14:07:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.536 14:07:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.536 14:07:09 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 274097 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.798 14:07:09 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.006 14:07:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:50.006 
14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.006 14:07:13 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.307 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:53.308 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:53.308 00:04:53.308 real 0m10.842s 00:04:53.308 user 0m2.958s 00:04:53.308 sys 0m4.961s 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.308 14:07:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:53.308 ************************************ 00:04:53.308 END TEST dm_mount 00:04:53.308 ************************************ 00:04:53.568 14:07:16 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:53.568 14:07:16 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:53.568 14:07:16 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.568 14:07:16 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
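For reference, the dm_mount run that just completed can be reproduced by hand roughly as follows. The exact device-mapper table fed to dmsetup by test/setup/devices.sh is not visible in the trace, so this sketch assumes a simple linear concatenation of the two partitions; disk and mount paths are the same placeholders as before.

#!/usr/bin/env bash
# Sketch only: destroys data on $disk; the linear dm table is an assumption.
set -euo pipefail
disk=/dev/nvme0n1
name=nvme_dm_test
mnt="$SPDK_DIR/test/setup/dm_mount"

sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:2099199         # partition 1, bounds from the trace
sgdisk "$disk" --new=2:2099200:4196351      # partition 2, bounds from the trace

p1=$(blockdev --getsz "${disk}p1")          # sizes in 512-byte sectors
p2=$(blockdev --getsz "${disk}p2")
dmsetup create "$name" <<EOF
0 $p1 linear ${disk}p1 0
$p1 $p2 linear ${disk}p2 0
EOF

mkfs.ext4 -qF "/dev/mapper/$name"
mkdir -p "$mnt"
mount "/dev/mapper/$name" "$mnt"

ls "/sys/class/block/$(basename "$disk")p1/holders"   # holders link the verify step checks (dm-0)

# Cleanup mirrors the trace.
umount "$mnt"
dmsetup remove --force "$name"
wipefs --all "${disk}p1" "${disk}p2"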
00:04:53.568 14:07:16 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:53.568 14:07:16 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.568 14:07:16 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:53.829 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:53.829 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:53.829 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:53.829 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:53.829 14:07:17 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:53.829 14:07:17 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.829 14:07:17 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:53.829 14:07:17 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.829 14:07:17 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:53.829 14:07:17 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.829 14:07:17 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:53.829 00:04:53.829 real 0m29.453s 00:04:53.829 user 0m9.072s 00:04:53.829 sys 0m15.242s 00:04:53.829 14:07:17 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.829 14:07:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:53.829 ************************************ 00:04:53.829 END TEST devices 00:04:53.829 ************************************ 00:04:53.829 00:04:53.829 real 1m40.198s 00:04:53.829 user 0m34.303s 00:04:53.829 sys 0m57.561s 00:04:53.829 14:07:17 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.829 14:07:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:53.829 ************************************ 00:04:53.829 END TEST setup.sh 00:04:53.829 ************************************ 00:04:53.829 14:07:17 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:58.037 Hugepages 00:04:58.037 node hugesize free / total 00:04:58.037 node0 1048576kB 0 / 0 00:04:58.037 node0 2048kB 2048 / 2048 00:04:58.037 node1 1048576kB 0 / 0 00:04:58.037 node1 2048kB 0 / 0 00:04:58.037 00:04:58.037 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:58.037 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:58.037 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:58.037 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:58.037 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:58.037 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:58.037 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:58.037 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:58.037 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:58.037 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:58.037 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:58.037 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:58.037 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:58.037 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:58.037 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:58.037 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:58.037 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:58.037 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:58.037 14:07:21 -- spdk/autotest.sh@130 -- # uname -s 
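The hugepage table printed above (2048 pages of 2048 kB on node0, empty 1048576 kB pools elsewhere) is read from the kernel's per-node sysfs counters. A small sketch that reproduces the same free/total view without going through setup.sh, assuming the standard sysfs layout on a NUMA machine:

#!/usr/bin/env bash
# Print "nodeN <pagesize> <free> / <total>" for every hugepage pool, as in the table above.
for node in /sys/devices/system/node/node[0-9]*; do
    for pool in "$node"/hugepages/hugepages-*kB; do
        size=${pool##*hugepages-}                       # e.g. 2048kB or 1048576kB
        total=$(cat "$pool/nr_hugepages")
        free=$(cat "$pool/free_hugepages")
        printf '%s %s %s / %s\n' "$(basename "$node")" "$size" "$free" "$total"
    done
done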
00:04:58.037 14:07:21 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:58.037 14:07:21 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:58.037 14:07:21 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:01.341 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:01.341 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:01.341 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:01.341 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:01.341 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:01.341 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:01.341 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:01.341 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:01.341 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:01.341 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:01.341 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:01.341 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:01.341 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:01.341 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:01.341 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:01.341 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:02.726 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:02.726 14:07:26 -- common/autotest_common.sh@1531 -- # sleep 1 00:05:03.667 14:07:27 -- common/autotest_common.sh@1532 -- # bdfs=() 00:05:03.667 14:07:27 -- common/autotest_common.sh@1532 -- # local bdfs 00:05:03.667 14:07:27 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:05:03.667 14:07:27 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:05:03.667 14:07:27 -- common/autotest_common.sh@1512 -- # bdfs=() 00:05:03.667 14:07:27 -- common/autotest_common.sh@1512 -- # local bdfs 00:05:03.667 14:07:27 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.667 14:07:27 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:03.667 14:07:27 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:05:03.928 14:07:27 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:05:03.928 14:07:27 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:05:03.928 14:07:27 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:08.132 Waiting for block devices as requested 00:05:08.132 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:08.132 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:08.132 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:08.132 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:08.132 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:08.132 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:08.132 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:08.132 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:08.132 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:08.392 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:08.392 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:08.392 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:08.392 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:08.654 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:08.654 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:08.654 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:08.915 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:08.915 14:07:32 -- common/autotest_common.sh@1537 -- # 
for bdf in "${bdfs[@]}" 00:05:08.915 14:07:32 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:08.915 14:07:32 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:05:08.915 14:07:32 -- common/autotest_common.sh@1501 -- # grep 0000:65:00.0/nvme/nvme 00:05:08.915 14:07:32 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:08.915 14:07:32 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:08.915 14:07:32 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:08.915 14:07:32 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:05:08.915 14:07:32 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:05:08.915 14:07:32 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:05:08.915 14:07:32 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:05:08.915 14:07:32 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:05:08.915 14:07:32 -- common/autotest_common.sh@1544 -- # grep oacs 00:05:08.915 14:07:32 -- common/autotest_common.sh@1544 -- # oacs=' 0x5f' 00:05:08.915 14:07:32 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:05:08.915 14:07:32 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:05:08.915 14:07:32 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:05:08.915 14:07:32 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:05:08.915 14:07:32 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:05:08.915 14:07:32 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:05:08.915 14:07:32 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:05:08.915 14:07:32 -- common/autotest_common.sh@1556 -- # continue 00:05:08.915 14:07:32 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:08.915 14:07:32 -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:08.915 14:07:32 -- common/autotest_common.sh@10 -- # set +x 00:05:08.915 14:07:32 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:08.915 14:07:32 -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:08.915 14:07:32 -- common/autotest_common.sh@10 -- # set +x 00:05:08.915 14:07:32 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:13.116 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:13.116 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:13.116 14:07:36 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:13.117 14:07:36 -- common/autotest_common.sh@729 -- # xtrace_disable 
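The pre-cleanup pass above decides whether a controller's namespaces need reverting by parsing two nvme id-ctrl fields: OACS (0x5f here, where bit 3 / 0x08 means Namespace Management is supported) and unvmcap (0 here, i.e. no unallocated capacity, so the loop simply continues). A condensed sketch of that check, assuming nvme-cli and a controller node at /dev/nvme0:

#!/usr/bin/env bash
# Sketch of the id-ctrl checks traced above; field parsing matches the log's grep/cut.
ctrl=/dev/nvme0
oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)         # ' 0x5f' in the run above
if (( oacs & 0x8 )); then                                      # OACS bit 3: Namespace Management
    unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
    if (( unvmcap == 0 )); then
        echo "$ctrl: no unallocated capacity, nothing to revert"
    else
        echo "$ctrl: $unvmcap bytes unallocated, namespaces would be rebuilt here"
    fi
else
    echo "$ctrl: namespace management not supported"
fi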
00:05:13.117 14:07:36 -- common/autotest_common.sh@10 -- # set +x 00:05:13.117 14:07:36 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:13.117 14:07:36 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:05:13.117 14:07:36 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:05:13.117 14:07:36 -- common/autotest_common.sh@1576 -- # bdfs=() 00:05:13.117 14:07:36 -- common/autotest_common.sh@1576 -- # local bdfs 00:05:13.117 14:07:36 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:05:13.117 14:07:36 -- common/autotest_common.sh@1512 -- # bdfs=() 00:05:13.117 14:07:36 -- common/autotest_common.sh@1512 -- # local bdfs 00:05:13.117 14:07:36 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:13.117 14:07:36 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:13.117 14:07:36 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:05:13.117 14:07:36 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:05:13.117 14:07:36 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:05:13.117 14:07:36 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:05:13.117 14:07:36 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:13.117 14:07:36 -- common/autotest_common.sh@1579 -- # device=0xa80a 00:05:13.117 14:07:36 -- common/autotest_common.sh@1580 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:13.117 14:07:36 -- common/autotest_common.sh@1585 -- # printf '%s\n' 00:05:13.117 14:07:36 -- common/autotest_common.sh@1591 -- # [[ -z '' ]] 00:05:13.117 14:07:36 -- common/autotest_common.sh@1592 -- # return 0 00:05:13.117 14:07:36 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:13.117 14:07:36 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:13.117 14:07:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:13.117 14:07:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:13.117 14:07:36 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:13.117 14:07:36 -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:13.117 14:07:36 -- common/autotest_common.sh@10 -- # set +x 00:05:13.117 14:07:36 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:13.117 14:07:36 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:13.117 14:07:36 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:13.117 14:07:36 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:13.117 14:07:36 -- common/autotest_common.sh@10 -- # set +x 00:05:13.117 ************************************ 00:05:13.117 START TEST env 00:05:13.117 ************************************ 00:05:13.117 14:07:36 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:13.117 * Looking for test storage... 
00:05:13.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:13.117 14:07:36 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:13.117 14:07:36 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:13.117 14:07:36 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:13.117 14:07:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.117 ************************************ 00:05:13.117 START TEST env_memory 00:05:13.117 ************************************ 00:05:13.117 14:07:36 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:13.117 00:05:13.117 00:05:13.117 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.117 http://cunit.sourceforge.net/ 00:05:13.117 00:05:13.117 00:05:13.117 Suite: memory 00:05:13.117 Test: alloc and free memory map ...[2024-06-07 14:07:36.708291] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:13.117 passed 00:05:13.117 Test: mem map translation ...[2024-06-07 14:07:36.733851] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:13.117 [2024-06-07 14:07:36.733883] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:13.117 [2024-06-07 14:07:36.733930] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:13.117 [2024-06-07 14:07:36.733938] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:13.378 passed 00:05:13.378 Test: mem map registration ...[2024-06-07 14:07:36.789333] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:13.378 [2024-06-07 14:07:36.789355] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:13.378 passed 00:05:13.378 Test: mem map adjacent registrations ...passed 00:05:13.378 00:05:13.378 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.378 suites 1 1 n/a 0 0 00:05:13.378 tests 4 4 4 0 0 00:05:13.378 asserts 152 152 152 0 n/a 00:05:13.378 00:05:13.378 Elapsed time = 0.192 seconds 00:05:13.378 00:05:13.378 real 0m0.207s 00:05:13.378 user 0m0.195s 00:05:13.378 sys 0m0.011s 00:05:13.378 14:07:36 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:13.378 14:07:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:13.378 ************************************ 00:05:13.378 END TEST env_memory 00:05:13.378 ************************************ 00:05:13.378 14:07:36 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:13.378 14:07:36 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:13.378 14:07:36 env -- common/autotest_common.sh@1106 -- # xtrace_disable 
00:05:13.378 14:07:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.378 ************************************ 00:05:13.378 START TEST env_vtophys 00:05:13.378 ************************************ 00:05:13.378 14:07:36 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:13.378 EAL: lib.eal log level changed from notice to debug 00:05:13.378 EAL: Detected lcore 0 as core 0 on socket 0 00:05:13.378 EAL: Detected lcore 1 as core 1 on socket 0 00:05:13.378 EAL: Detected lcore 2 as core 2 on socket 0 00:05:13.378 EAL: Detected lcore 3 as core 3 on socket 0 00:05:13.378 EAL: Detected lcore 4 as core 4 on socket 0 00:05:13.378 EAL: Detected lcore 5 as core 5 on socket 0 00:05:13.378 EAL: Detected lcore 6 as core 6 on socket 0 00:05:13.378 EAL: Detected lcore 7 as core 7 on socket 0 00:05:13.378 EAL: Detected lcore 8 as core 8 on socket 0 00:05:13.378 EAL: Detected lcore 9 as core 9 on socket 0 00:05:13.378 EAL: Detected lcore 10 as core 10 on socket 0 00:05:13.378 EAL: Detected lcore 11 as core 11 on socket 0 00:05:13.378 EAL: Detected lcore 12 as core 12 on socket 0 00:05:13.378 EAL: Detected lcore 13 as core 13 on socket 0 00:05:13.378 EAL: Detected lcore 14 as core 14 on socket 0 00:05:13.378 EAL: Detected lcore 15 as core 15 on socket 0 00:05:13.378 EAL: Detected lcore 16 as core 16 on socket 0 00:05:13.378 EAL: Detected lcore 17 as core 17 on socket 0 00:05:13.378 EAL: Detected lcore 18 as core 18 on socket 0 00:05:13.378 EAL: Detected lcore 19 as core 19 on socket 0 00:05:13.378 EAL: Detected lcore 20 as core 20 on socket 0 00:05:13.378 EAL: Detected lcore 21 as core 21 on socket 0 00:05:13.378 EAL: Detected lcore 22 as core 22 on socket 0 00:05:13.378 EAL: Detected lcore 23 as core 23 on socket 0 00:05:13.378 EAL: Detected lcore 24 as core 24 on socket 0 00:05:13.378 EAL: Detected lcore 25 as core 25 on socket 0 00:05:13.378 EAL: Detected lcore 26 as core 26 on socket 0 00:05:13.378 EAL: Detected lcore 27 as core 27 on socket 0 00:05:13.378 EAL: Detected lcore 28 as core 28 on socket 0 00:05:13.378 EAL: Detected lcore 29 as core 29 on socket 0 00:05:13.378 EAL: Detected lcore 30 as core 30 on socket 0 00:05:13.378 EAL: Detected lcore 31 as core 31 on socket 0 00:05:13.378 EAL: Detected lcore 32 as core 32 on socket 0 00:05:13.378 EAL: Detected lcore 33 as core 33 on socket 0 00:05:13.378 EAL: Detected lcore 34 as core 34 on socket 0 00:05:13.378 EAL: Detected lcore 35 as core 35 on socket 0 00:05:13.378 EAL: Detected lcore 36 as core 0 on socket 1 00:05:13.378 EAL: Detected lcore 37 as core 1 on socket 1 00:05:13.378 EAL: Detected lcore 38 as core 2 on socket 1 00:05:13.378 EAL: Detected lcore 39 as core 3 on socket 1 00:05:13.378 EAL: Detected lcore 40 as core 4 on socket 1 00:05:13.378 EAL: Detected lcore 41 as core 5 on socket 1 00:05:13.378 EAL: Detected lcore 42 as core 6 on socket 1 00:05:13.378 EAL: Detected lcore 43 as core 7 on socket 1 00:05:13.378 EAL: Detected lcore 44 as core 8 on socket 1 00:05:13.378 EAL: Detected lcore 45 as core 9 on socket 1 00:05:13.378 EAL: Detected lcore 46 as core 10 on socket 1 00:05:13.378 EAL: Detected lcore 47 as core 11 on socket 1 00:05:13.378 EAL: Detected lcore 48 as core 12 on socket 1 00:05:13.378 EAL: Detected lcore 49 as core 13 on socket 1 00:05:13.378 EAL: Detected lcore 50 as core 14 on socket 1 00:05:13.378 EAL: Detected lcore 51 as core 15 on socket 1 00:05:13.378 EAL: Detected lcore 52 as core 16 on socket 1 00:05:13.378 EAL: Detected lcore 
53 as core 17 on socket 1 00:05:13.378 EAL: Detected lcore 54 as core 18 on socket 1 00:05:13.378 EAL: Detected lcore 55 as core 19 on socket 1 00:05:13.378 EAL: Detected lcore 56 as core 20 on socket 1 00:05:13.378 EAL: Detected lcore 57 as core 21 on socket 1 00:05:13.378 EAL: Detected lcore 58 as core 22 on socket 1 00:05:13.378 EAL: Detected lcore 59 as core 23 on socket 1 00:05:13.378 EAL: Detected lcore 60 as core 24 on socket 1 00:05:13.378 EAL: Detected lcore 61 as core 25 on socket 1 00:05:13.378 EAL: Detected lcore 62 as core 26 on socket 1 00:05:13.378 EAL: Detected lcore 63 as core 27 on socket 1 00:05:13.378 EAL: Detected lcore 64 as core 28 on socket 1 00:05:13.379 EAL: Detected lcore 65 as core 29 on socket 1 00:05:13.379 EAL: Detected lcore 66 as core 30 on socket 1 00:05:13.379 EAL: Detected lcore 67 as core 31 on socket 1 00:05:13.379 EAL: Detected lcore 68 as core 32 on socket 1 00:05:13.379 EAL: Detected lcore 69 as core 33 on socket 1 00:05:13.379 EAL: Detected lcore 70 as core 34 on socket 1 00:05:13.379 EAL: Detected lcore 71 as core 35 on socket 1 00:05:13.379 EAL: Detected lcore 72 as core 0 on socket 0 00:05:13.379 EAL: Detected lcore 73 as core 1 on socket 0 00:05:13.379 EAL: Detected lcore 74 as core 2 on socket 0 00:05:13.379 EAL: Detected lcore 75 as core 3 on socket 0 00:05:13.379 EAL: Detected lcore 76 as core 4 on socket 0 00:05:13.379 EAL: Detected lcore 77 as core 5 on socket 0 00:05:13.379 EAL: Detected lcore 78 as core 6 on socket 0 00:05:13.379 EAL: Detected lcore 79 as core 7 on socket 0 00:05:13.379 EAL: Detected lcore 80 as core 8 on socket 0 00:05:13.379 EAL: Detected lcore 81 as core 9 on socket 0 00:05:13.379 EAL: Detected lcore 82 as core 10 on socket 0 00:05:13.379 EAL: Detected lcore 83 as core 11 on socket 0 00:05:13.379 EAL: Detected lcore 84 as core 12 on socket 0 00:05:13.379 EAL: Detected lcore 85 as core 13 on socket 0 00:05:13.379 EAL: Detected lcore 86 as core 14 on socket 0 00:05:13.379 EAL: Detected lcore 87 as core 15 on socket 0 00:05:13.379 EAL: Detected lcore 88 as core 16 on socket 0 00:05:13.379 EAL: Detected lcore 89 as core 17 on socket 0 00:05:13.379 EAL: Detected lcore 90 as core 18 on socket 0 00:05:13.379 EAL: Detected lcore 91 as core 19 on socket 0 00:05:13.379 EAL: Detected lcore 92 as core 20 on socket 0 00:05:13.379 EAL: Detected lcore 93 as core 21 on socket 0 00:05:13.379 EAL: Detected lcore 94 as core 22 on socket 0 00:05:13.379 EAL: Detected lcore 95 as core 23 on socket 0 00:05:13.379 EAL: Detected lcore 96 as core 24 on socket 0 00:05:13.379 EAL: Detected lcore 97 as core 25 on socket 0 00:05:13.379 EAL: Detected lcore 98 as core 26 on socket 0 00:05:13.379 EAL: Detected lcore 99 as core 27 on socket 0 00:05:13.379 EAL: Detected lcore 100 as core 28 on socket 0 00:05:13.379 EAL: Detected lcore 101 as core 29 on socket 0 00:05:13.379 EAL: Detected lcore 102 as core 30 on socket 0 00:05:13.379 EAL: Detected lcore 103 as core 31 on socket 0 00:05:13.379 EAL: Detected lcore 104 as core 32 on socket 0 00:05:13.379 EAL: Detected lcore 105 as core 33 on socket 0 00:05:13.379 EAL: Detected lcore 106 as core 34 on socket 0 00:05:13.379 EAL: Detected lcore 107 as core 35 on socket 0 00:05:13.379 EAL: Detected lcore 108 as core 0 on socket 1 00:05:13.379 EAL: Detected lcore 109 as core 1 on socket 1 00:05:13.379 EAL: Detected lcore 110 as core 2 on socket 1 00:05:13.379 EAL: Detected lcore 111 as core 3 on socket 1 00:05:13.379 EAL: Detected lcore 112 as core 4 on socket 1 00:05:13.379 EAL: Detected lcore 113 as core 5 on 
socket 1 00:05:13.379 EAL: Detected lcore 114 as core 6 on socket 1 00:05:13.379 EAL: Detected lcore 115 as core 7 on socket 1 00:05:13.379 EAL: Detected lcore 116 as core 8 on socket 1 00:05:13.379 EAL: Detected lcore 117 as core 9 on socket 1 00:05:13.379 EAL: Detected lcore 118 as core 10 on socket 1 00:05:13.379 EAL: Detected lcore 119 as core 11 on socket 1 00:05:13.379 EAL: Detected lcore 120 as core 12 on socket 1 00:05:13.379 EAL: Detected lcore 121 as core 13 on socket 1 00:05:13.379 EAL: Detected lcore 122 as core 14 on socket 1 00:05:13.379 EAL: Detected lcore 123 as core 15 on socket 1 00:05:13.379 EAL: Detected lcore 124 as core 16 on socket 1 00:05:13.379 EAL: Detected lcore 125 as core 17 on socket 1 00:05:13.379 EAL: Detected lcore 126 as core 18 on socket 1 00:05:13.379 EAL: Detected lcore 127 as core 19 on socket 1 00:05:13.379 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:13.379 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:13.379 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:13.379 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:13.379 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:13.379 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:13.379 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:13.379 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:13.379 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:13.379 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:13.379 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:13.379 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:13.379 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:13.379 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:13.379 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:13.379 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:13.379 EAL: Maximum logical cores by configuration: 128 00:05:13.379 EAL: Detected CPU lcores: 128 00:05:13.379 EAL: Detected NUMA nodes: 2 00:05:13.379 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:13.379 EAL: Detected shared linkage of DPDK 00:05:13.379 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:13.379 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:13.379 EAL: Registered [vdev] bus. 
00:05:13.379 EAL: bus.vdev log level changed from disabled to notice 00:05:13.379 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:13.379 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:13.379 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:13.379 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:13.379 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:13.379 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:13.379 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:13.379 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:13.379 EAL: No shared files mode enabled, IPC will be disabled 00:05:13.379 EAL: No shared files mode enabled, IPC is disabled 00:05:13.379 EAL: Bus pci wants IOVA as 'DC' 00:05:13.379 EAL: Bus vdev wants IOVA as 'DC' 00:05:13.379 EAL: Buses did not request a specific IOVA mode. 00:05:13.379 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:13.379 EAL: Selected IOVA mode 'VA' 00:05:13.379 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.379 EAL: Probing VFIO support... 00:05:13.379 EAL: IOMMU type 1 (Type 1) is supported 00:05:13.379 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:13.379 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:13.379 EAL: VFIO support initialized 00:05:13.379 EAL: Ask a virtual area of 0x2e000 bytes 00:05:13.379 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:13.379 EAL: Setting up physically contiguous memory... 
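The EAL output above (shared-library loading, IOVA mode selection, VFIO probing) is produced while SPDK brings up its DPDK-backed environment; the vtophys test that follows then checks virtual-to-physical (or IOVA) translation for DMA-safe buffers. A minimal stand-alone sketch of that flow is shown below. It assumes a recent SPDK where spdk_vtophys() takes an optional size pointer; the application name and core mask are placeholders, and exact spdk_env_opts initialization details can vary between releases.

#include "spdk/stdinc.h"
#include "spdk/env.h"

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "vtophys_example";   /* illustrative app name */
    opts.core_mask = "0x1";
    if (spdk_env_init(&opts) != 0) {
        fprintf(stderr, "spdk_env_init failed\n");
        return 1;
    }

    /* Allocate a pinned, DMA-safe buffer from hugepage-backed memory. */
    void *buf = spdk_dma_zmalloc(4096, 0x1000, NULL);
    if (buf == NULL) {
        spdk_env_fini();
        return 1;
    }

    /* Look up the physical address / IOVA; SPDK_VTOPHYS_ERROR means no mapping. */
    uint64_t paddr = spdk_vtophys(buf, NULL);
    printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);

    spdk_dma_free(buf);
    spdk_env_fini();
    return 0;
}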
00:05:13.379 EAL: Setting maximum number of open files to 524288 00:05:13.379 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:13.379 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:13.379 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:13.379 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.379 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:13.379 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.379 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.379 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:13.379 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:13.379 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.379 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:13.379 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.379 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.379 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:13.379 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:13.379 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.379 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:13.379 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.379 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.379 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:13.379 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:13.379 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.379 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:13.379 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:13.379 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.379 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:13.379 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:13.379 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:13.379 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.379 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:13.379 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:13.379 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.379 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:13.379 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:13.379 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.379 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:13.379 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:13.379 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.379 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:13.379 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:13.379 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.379 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:13.379 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:13.379 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.379 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:13.379 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:13.379 EAL: Ask a virtual area of 0x61000 bytes 00:05:13.379 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:13.379 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:13.379 EAL: Ask a virtual area of 0x400000000 bytes 00:05:13.379 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:13.379 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:13.379 EAL: Hugepages will be freed exactly as allocated. 00:05:13.379 EAL: No shared files mode enabled, IPC is disabled 00:05:13.379 EAL: No shared files mode enabled, IPC is disabled 00:05:13.379 EAL: TSC frequency is ~2400000 KHz 00:05:13.379 EAL: Main lcore 0 is ready (tid=7f9536658a00;cpuset=[0]) 00:05:13.379 EAL: Trying to obtain current memory policy. 00:05:13.379 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.379 EAL: Restoring previous memory policy: 0 00:05:13.379 EAL: request: mp_malloc_sync 00:05:13.379 EAL: No shared files mode enabled, IPC is disabled 00:05:13.380 EAL: Heap on socket 0 was expanded by 2MB 00:05:13.380 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:13.641 EAL: Mem event callback 'spdk:(nil)' registered 00:05:13.641 00:05:13.641 00:05:13.641 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.641 http://cunit.sourceforge.net/ 00:05:13.641 00:05:13.641 00:05:13.641 Suite: components_suite 00:05:13.641 Test: vtophys_malloc_test ...passed 00:05:13.641 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:13.641 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.641 EAL: Restoring previous memory policy: 4 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was expanded by 4MB 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was shrunk by 4MB 00:05:13.641 EAL: Trying to obtain current memory policy. 00:05:13.641 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.641 EAL: Restoring previous memory policy: 4 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was expanded by 6MB 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was shrunk by 6MB 00:05:13.641 EAL: Trying to obtain current memory policy. 00:05:13.641 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.641 EAL: Restoring previous memory policy: 4 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was expanded by 10MB 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was shrunk by 10MB 00:05:13.641 EAL: Trying to obtain current memory policy. 
00:05:13.641 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.641 EAL: Restoring previous memory policy: 4 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was expanded by 18MB 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was shrunk by 18MB 00:05:13.641 EAL: Trying to obtain current memory policy. 00:05:13.641 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.641 EAL: Restoring previous memory policy: 4 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was expanded by 34MB 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was shrunk by 34MB 00:05:13.641 EAL: Trying to obtain current memory policy. 00:05:13.641 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.641 EAL: Restoring previous memory policy: 4 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was expanded by 66MB 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was shrunk by 66MB 00:05:13.641 EAL: Trying to obtain current memory policy. 00:05:13.641 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.641 EAL: Restoring previous memory policy: 4 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was expanded by 130MB 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was shrunk by 130MB 00:05:13.641 EAL: Trying to obtain current memory policy. 00:05:13.641 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.641 EAL: Restoring previous memory policy: 4 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was expanded by 258MB 00:05:13.641 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.641 EAL: request: mp_malloc_sync 00:05:13.641 EAL: No shared files mode enabled, IPC is disabled 00:05:13.641 EAL: Heap on socket 0 was shrunk by 258MB 00:05:13.641 EAL: Trying to obtain current memory policy. 
00:05:13.641 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.903 EAL: Restoring previous memory policy: 4 00:05:13.903 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.903 EAL: request: mp_malloc_sync 00:05:13.903 EAL: No shared files mode enabled, IPC is disabled 00:05:13.903 EAL: Heap on socket 0 was expanded by 514MB 00:05:13.903 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.903 EAL: request: mp_malloc_sync 00:05:13.903 EAL: No shared files mode enabled, IPC is disabled 00:05:13.903 EAL: Heap on socket 0 was shrunk by 514MB 00:05:13.903 EAL: Trying to obtain current memory policy. 00:05:13.903 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:13.903 EAL: Restoring previous memory policy: 4 00:05:13.903 EAL: Calling mem event callback 'spdk:(nil)' 00:05:13.903 EAL: request: mp_malloc_sync 00:05:13.903 EAL: No shared files mode enabled, IPC is disabled 00:05:13.903 EAL: Heap on socket 0 was expanded by 1026MB 00:05:14.163 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.163 EAL: request: mp_malloc_sync 00:05:14.163 EAL: No shared files mode enabled, IPC is disabled 00:05:14.163 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:14.163 passed 00:05:14.163 00:05:14.163 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.163 suites 1 1 n/a 0 0 00:05:14.163 tests 2 2 2 0 0 00:05:14.163 asserts 497 497 497 0 n/a 00:05:14.163 00:05:14.163 Elapsed time = 0.656 seconds 00:05:14.163 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.163 EAL: request: mp_malloc_sync 00:05:14.163 EAL: No shared files mode enabled, IPC is disabled 00:05:14.163 EAL: Heap on socket 0 was shrunk by 2MB 00:05:14.163 EAL: No shared files mode enabled, IPC is disabled 00:05:14.163 EAL: No shared files mode enabled, IPC is disabled 00:05:14.163 EAL: No shared files mode enabled, IPC is disabled 00:05:14.163 00:05:14.163 real 0m0.783s 00:05:14.163 user 0m0.407s 00:05:14.163 sys 0m0.350s 00:05:14.163 14:07:37 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:14.163 14:07:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:14.163 ************************************ 00:05:14.163 END TEST env_vtophys 00:05:14.163 ************************************ 00:05:14.163 14:07:37 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:14.163 14:07:37 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:14.163 14:07:37 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:14.163 14:07:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.163 ************************************ 00:05:14.163 START TEST env_pci 00:05:14.163 ************************************ 00:05:14.163 14:07:37 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:14.422 00:05:14.422 00:05:14.422 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.422 http://cunit.sourceforge.net/ 00:05:14.422 00:05:14.422 00:05:14.422 Suite: pci 00:05:14.422 Test: pci_hook ...[2024-06-07 14:07:37.820958] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 286168 has claimed it 00:05:14.422 EAL: Cannot find device (10000:00:01.0) 00:05:14.422 EAL: Failed to attach device on primary process 00:05:14.422 passed 00:05:14.422 00:05:14.422 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:14.422 suites 1 1 n/a 0 0 00:05:14.422 tests 1 1 1 0 0 00:05:14.422 asserts 25 25 25 0 n/a 00:05:14.422 00:05:14.422 Elapsed time = 0.035 seconds 00:05:14.422 00:05:14.422 real 0m0.052s 00:05:14.422 user 0m0.014s 00:05:14.422 sys 0m0.038s 00:05:14.422 14:07:37 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:14.422 14:07:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:14.422 ************************************ 00:05:14.422 END TEST env_pci 00:05:14.422 ************************************ 00:05:14.422 14:07:37 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:14.422 14:07:37 env -- env/env.sh@15 -- # uname 00:05:14.422 14:07:37 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:14.422 14:07:37 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:14.422 14:07:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:14.422 14:07:37 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:05:14.422 14:07:37 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:14.422 14:07:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.422 ************************************ 00:05:14.422 START TEST env_dpdk_post_init 00:05:14.422 ************************************ 00:05:14.422 14:07:37 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:14.422 EAL: Detected CPU lcores: 128 00:05:14.422 EAL: Detected NUMA nodes: 2 00:05:14.422 EAL: Detected shared linkage of DPDK 00:05:14.422 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:14.422 EAL: Selected IOVA mode 'VA' 00:05:14.422 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.422 EAL: VFIO support initialized 00:05:14.422 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:14.422 EAL: Using IOMMU type 1 (Type 1) 00:05:14.682 EAL: Ignore mapping IO port bar(1) 00:05:14.682 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:14.943 EAL: Ignore mapping IO port bar(1) 00:05:14.943 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:15.203 EAL: Ignore mapping IO port bar(1) 00:05:15.203 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:15.203 EAL: Ignore mapping IO port bar(1) 00:05:15.463 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:15.463 EAL: Ignore mapping IO port bar(1) 00:05:15.723 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:15.723 EAL: Ignore mapping IO port bar(1) 00:05:15.984 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:15.984 EAL: Ignore mapping IO port bar(1) 00:05:15.984 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:16.262 EAL: Ignore mapping IO port bar(1) 00:05:16.262 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:16.523 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:16.783 EAL: Ignore mapping IO port bar(1) 00:05:16.783 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:16.783 EAL: Ignore mapping IO port bar(1) 00:05:17.044 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
00:05:17.044 EAL: Ignore mapping IO port bar(1) 00:05:17.305 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:17.305 EAL: Ignore mapping IO port bar(1) 00:05:17.566 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:17.566 EAL: Ignore mapping IO port bar(1) 00:05:17.566 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:17.826 EAL: Ignore mapping IO port bar(1) 00:05:17.826 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:18.086 EAL: Ignore mapping IO port bar(1) 00:05:18.086 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:18.346 EAL: Ignore mapping IO port bar(1) 00:05:18.346 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:18.346 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:18.346 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:18.346 Starting DPDK initialization... 00:05:18.346 Starting SPDK post initialization... 00:05:18.346 SPDK NVMe probe 00:05:18.346 Attaching to 0000:65:00.0 00:05:18.346 Attached to 0000:65:00.0 00:05:18.346 Cleaning up... 00:05:20.318 00:05:20.318 real 0m5.719s 00:05:20.318 user 0m0.178s 00:05:20.318 sys 0m0.084s 00:05:20.318 14:07:43 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:20.318 14:07:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:20.319 ************************************ 00:05:20.319 END TEST env_dpdk_post_init 00:05:20.319 ************************************ 00:05:20.319 14:07:43 env -- env/env.sh@26 -- # uname 00:05:20.319 14:07:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:20.319 14:07:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:20.319 14:07:43 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:20.319 14:07:43 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:20.319 14:07:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.319 ************************************ 00:05:20.319 START TEST env_mem_callbacks 00:05:20.319 ************************************ 00:05:20.319 14:07:43 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:20.319 EAL: Detected CPU lcores: 128 00:05:20.319 EAL: Detected NUMA nodes: 2 00:05:20.319 EAL: Detected shared linkage of DPDK 00:05:20.319 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:20.319 EAL: Selected IOVA mode 'VA' 00:05:20.319 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.319 EAL: VFIO support initialized 00:05:20.319 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:20.319 00:05:20.319 00:05:20.319 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.319 http://cunit.sourceforge.net/ 00:05:20.319 00:05:20.319 00:05:20.319 Suite: memory 00:05:20.319 Test: test ... 
00:05:20.319 register 0x200000200000 2097152 00:05:20.319 malloc 3145728 00:05:20.319 register 0x200000400000 4194304 00:05:20.319 buf 0x200000500000 len 3145728 PASSED 00:05:20.319 malloc 64 00:05:20.319 buf 0x2000004fff40 len 64 PASSED 00:05:20.319 malloc 4194304 00:05:20.319 register 0x200000800000 6291456 00:05:20.319 buf 0x200000a00000 len 4194304 PASSED 00:05:20.319 free 0x200000500000 3145728 00:05:20.319 free 0x2000004fff40 64 00:05:20.319 unregister 0x200000400000 4194304 PASSED 00:05:20.319 free 0x200000a00000 4194304 00:05:20.319 unregister 0x200000800000 6291456 PASSED 00:05:20.319 malloc 8388608 00:05:20.319 register 0x200000400000 10485760 00:05:20.319 buf 0x200000600000 len 8388608 PASSED 00:05:20.319 free 0x200000600000 8388608 00:05:20.319 unregister 0x200000400000 10485760 PASSED 00:05:20.319 passed 00:05:20.319 00:05:20.319 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.319 suites 1 1 n/a 0 0 00:05:20.319 tests 1 1 1 0 0 00:05:20.319 asserts 15 15 15 0 n/a 00:05:20.319 00:05:20.319 Elapsed time = 0.005 seconds 00:05:20.319 00:05:20.319 real 0m0.060s 00:05:20.319 user 0m0.011s 00:05:20.319 sys 0m0.049s 00:05:20.319 14:07:43 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:20.319 14:07:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:20.319 ************************************ 00:05:20.319 END TEST env_mem_callbacks 00:05:20.319 ************************************ 00:05:20.319 00:05:20.319 real 0m7.317s 00:05:20.319 user 0m0.998s 00:05:20.319 sys 0m0.862s 00:05:20.319 14:07:43 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:20.319 14:07:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.319 ************************************ 00:05:20.319 END TEST env 00:05:20.319 ************************************ 00:05:20.319 14:07:43 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:20.319 14:07:43 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:20.319 14:07:43 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:20.319 14:07:43 -- common/autotest_common.sh@10 -- # set +x 00:05:20.319 ************************************ 00:05:20.319 START TEST rpc 00:05:20.319 ************************************ 00:05:20.319 14:07:43 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:20.579 * Looking for test storage... 00:05:20.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:20.579 14:07:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=287622 00:05:20.579 14:07:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.580 14:07:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 287622 00:05:20.580 14:07:44 rpc -- common/autotest_common.sh@830 -- # '[' -z 287622 ']' 00:05:20.580 14:07:44 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.580 14:07:44 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:20.580 14:07:44 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
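The register/unregister pairs in the env_mem_callbacks trace above are emitted as SPDK notifies every allocated mem map when address ranges enter or leave its view. Application code that brings its own externally allocated buffers into that view uses the same hooks; a small sketch follows, assuming an already-initialized env and a pinned, 2 MB-aligned region (the helper name is illustrative).

#include "spdk/stdinc.h"
#include "spdk/env.h"

/* Make an externally allocated, 2 MB-aligned region visible to SPDK,
 * then remove it again; each call fires the notify callbacks traced above. */
static int
register_external_buffer(void *vaddr, size_t len)
{
    int rc;

    /* Both vaddr and len must be 2 MB aligned and the memory must stay pinned. */
    rc = spdk_mem_register(vaddr, len);
    if (rc != 0) {
        return rc;
    }

    /* ... issue I/O that DMAs to/from this buffer ... */

    return spdk_mem_unregister(vaddr, len);
}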
00:05:20.580 14:07:44 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:20.580 14:07:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.580 14:07:44 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:20.580 [2024-06-07 14:07:44.077501] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:05:20.580 [2024-06-07 14:07:44.077551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid287622 ] 00:05:20.580 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.580 [2024-06-07 14:07:44.143956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.580 [2024-06-07 14:07:44.178165] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:20.580 [2024-06-07 14:07:44.178212] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 287622' to capture a snapshot of events at runtime. 00:05:20.580 [2024-06-07 14:07:44.178220] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:20.580 [2024-06-07 14:07:44.178226] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:20.580 [2024-06-07 14:07:44.178232] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid287622 for offline analysis/debug. 00:05:20.580 [2024-06-07 14:07:44.178257] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.521 14:07:44 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:21.521 14:07:44 rpc -- common/autotest_common.sh@863 -- # return 0 00:05:21.521 14:07:44 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:21.521 14:07:44 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:21.521 14:07:44 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:21.521 14:07:44 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:21.521 14:07:44 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:21.521 14:07:44 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:21.521 14:07:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.522 ************************************ 00:05:21.522 START TEST rpc_integrity 00:05:21.522 ************************************ 00:05:21.522 14:07:44 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:21.522 14:07:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:21.522 14:07:44 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.522 14:07:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.522 14:07:44 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:21.522 14:07:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:21.522 14:07:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:21.522 14:07:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:21.522 14:07:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:21.522 14:07:44 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.522 14:07:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.522 14:07:44 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:21.522 14:07:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:21.522 14:07:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:21.522 14:07:44 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.522 14:07:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.522 14:07:44 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:21.522 14:07:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:21.522 { 00:05:21.522 "name": "Malloc0", 00:05:21.522 "aliases": [ 00:05:21.522 "a631644c-d7a7-4793-b383-582685b58936" 00:05:21.522 ], 00:05:21.522 "product_name": "Malloc disk", 00:05:21.522 "block_size": 512, 00:05:21.522 "num_blocks": 16384, 00:05:21.522 "uuid": "a631644c-d7a7-4793-b383-582685b58936", 00:05:21.522 "assigned_rate_limits": { 00:05:21.522 "rw_ios_per_sec": 0, 00:05:21.522 "rw_mbytes_per_sec": 0, 00:05:21.522 "r_mbytes_per_sec": 0, 00:05:21.522 "w_mbytes_per_sec": 0 00:05:21.522 }, 00:05:21.522 "claimed": false, 00:05:21.522 "zoned": false, 00:05:21.522 "supported_io_types": { 00:05:21.522 "read": true, 00:05:21.522 "write": true, 00:05:21.522 "unmap": true, 00:05:21.522 "write_zeroes": true, 00:05:21.522 "flush": true, 00:05:21.522 "reset": true, 00:05:21.522 "compare": false, 00:05:21.522 "compare_and_write": false, 00:05:21.522 "abort": true, 00:05:21.522 "nvme_admin": false, 00:05:21.522 "nvme_io": false 00:05:21.522 }, 00:05:21.522 "memory_domains": [ 00:05:21.522 { 00:05:21.522 "dma_device_id": "system", 00:05:21.522 "dma_device_type": 1 00:05:21.522 }, 00:05:21.522 { 00:05:21.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.522 "dma_device_type": 2 00:05:21.522 } 00:05:21.522 ], 00:05:21.522 "driver_specific": {} 00:05:21.522 } 00:05:21.522 ]' 00:05:21.522 14:07:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:21.522 14:07:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:21.522 14:07:44 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:21.522 14:07:44 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.522 14:07:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.522 [2024-06-07 14:07:44.988860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:21.522 [2024-06-07 14:07:44.988891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:21.522 [2024-06-07 14:07:44.988903] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeefc10 00:05:21.522 [2024-06-07 14:07:44.988909] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:21.522 [2024-06-07 14:07:44.990176] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:21.522 [2024-06-07 14:07:44.990203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:21.522 Passthru0 00:05:21.522 14:07:44 rpc.rpc_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:21.522 14:07:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:21.522 14:07:44 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.522 14:07:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.522 14:07:45 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:21.522 14:07:45 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:21.522 { 00:05:21.522 "name": "Malloc0", 00:05:21.522 "aliases": [ 00:05:21.522 "a631644c-d7a7-4793-b383-582685b58936" 00:05:21.522 ], 00:05:21.522 "product_name": "Malloc disk", 00:05:21.522 "block_size": 512, 00:05:21.522 "num_blocks": 16384, 00:05:21.522 "uuid": "a631644c-d7a7-4793-b383-582685b58936", 00:05:21.522 "assigned_rate_limits": { 00:05:21.522 "rw_ios_per_sec": 0, 00:05:21.522 "rw_mbytes_per_sec": 0, 00:05:21.522 "r_mbytes_per_sec": 0, 00:05:21.522 "w_mbytes_per_sec": 0 00:05:21.522 }, 00:05:21.522 "claimed": true, 00:05:21.522 "claim_type": "exclusive_write", 00:05:21.522 "zoned": false, 00:05:21.522 "supported_io_types": { 00:05:21.522 "read": true, 00:05:21.522 "write": true, 00:05:21.522 "unmap": true, 00:05:21.522 "write_zeroes": true, 00:05:21.522 "flush": true, 00:05:21.522 "reset": true, 00:05:21.522 "compare": false, 00:05:21.522 "compare_and_write": false, 00:05:21.522 "abort": true, 00:05:21.522 "nvme_admin": false, 00:05:21.522 "nvme_io": false 00:05:21.522 }, 00:05:21.522 "memory_domains": [ 00:05:21.522 { 00:05:21.522 "dma_device_id": "system", 00:05:21.522 "dma_device_type": 1 00:05:21.522 }, 00:05:21.522 { 00:05:21.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.522 "dma_device_type": 2 00:05:21.522 } 00:05:21.522 ], 00:05:21.522 "driver_specific": {} 00:05:21.522 }, 00:05:21.522 { 00:05:21.522 "name": "Passthru0", 00:05:21.522 "aliases": [ 00:05:21.522 "ce80a425-e30e-55e3-aca5-9fbaabf82049" 00:05:21.522 ], 00:05:21.522 "product_name": "passthru", 00:05:21.522 "block_size": 512, 00:05:21.522 "num_blocks": 16384, 00:05:21.522 "uuid": "ce80a425-e30e-55e3-aca5-9fbaabf82049", 00:05:21.522 "assigned_rate_limits": { 00:05:21.522 "rw_ios_per_sec": 0, 00:05:21.522 "rw_mbytes_per_sec": 0, 00:05:21.522 "r_mbytes_per_sec": 0, 00:05:21.522 "w_mbytes_per_sec": 0 00:05:21.522 }, 00:05:21.522 "claimed": false, 00:05:21.522 "zoned": false, 00:05:21.522 "supported_io_types": { 00:05:21.522 "read": true, 00:05:21.522 "write": true, 00:05:21.522 "unmap": true, 00:05:21.522 "write_zeroes": true, 00:05:21.522 "flush": true, 00:05:21.522 "reset": true, 00:05:21.522 "compare": false, 00:05:21.522 "compare_and_write": false, 00:05:21.522 "abort": true, 00:05:21.522 "nvme_admin": false, 00:05:21.522 "nvme_io": false 00:05:21.522 }, 00:05:21.522 "memory_domains": [ 00:05:21.522 { 00:05:21.522 "dma_device_id": "system", 00:05:21.522 "dma_device_type": 1 00:05:21.522 }, 00:05:21.522 { 00:05:21.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.522 "dma_device_type": 2 00:05:21.522 } 00:05:21.522 ], 00:05:21.522 "driver_specific": { 00:05:21.522 "passthru": { 00:05:21.522 "name": "Passthru0", 00:05:21.522 "base_bdev_name": "Malloc0" 00:05:21.522 } 00:05:21.522 } 00:05:21.522 } 00:05:21.522 ]' 00:05:21.522 14:07:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:21.522 14:07:45 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:21.522 14:07:45 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:21.522 14:07:45 rpc.rpc_integrity -- common/autotest_common.sh@560 
-- # xtrace_disable 00:05:21.522 14:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.522 14:07:45 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:21.522 14:07:45 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:21.522 14:07:45 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.522 14:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.522 14:07:45 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:21.522 14:07:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:21.522 14:07:45 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.522 14:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.522 14:07:45 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:21.522 14:07:45 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:21.522 14:07:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:21.522 14:07:45 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:21.522 00:05:21.522 real 0m0.291s 00:05:21.522 user 0m0.182s 00:05:21.522 sys 0m0.042s 00:05:21.522 14:07:45 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:21.522 14:07:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:21.522 ************************************ 00:05:21.522 END TEST rpc_integrity 00:05:21.522 ************************************ 00:05:21.783 14:07:45 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:21.783 14:07:45 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:21.783 14:07:45 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:21.783 14:07:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.783 ************************************ 00:05:21.783 START TEST rpc_plugins 00:05:21.783 ************************************ 00:05:21.783 14:07:45 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:05:21.783 14:07:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:21.783 14:07:45 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.783 14:07:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.783 14:07:45 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:21.783 14:07:45 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:21.783 14:07:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:21.783 14:07:45 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.783 14:07:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.783 14:07:45 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:21.783 14:07:45 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:21.783 { 00:05:21.783 "name": "Malloc1", 00:05:21.783 "aliases": [ 00:05:21.783 "8a710f7b-22a2-4b7b-a366-37d830260570" 00:05:21.783 ], 00:05:21.783 "product_name": "Malloc disk", 00:05:21.783 "block_size": 4096, 00:05:21.783 "num_blocks": 256, 00:05:21.783 "uuid": "8a710f7b-22a2-4b7b-a366-37d830260570", 00:05:21.783 "assigned_rate_limits": { 00:05:21.783 "rw_ios_per_sec": 0, 00:05:21.783 "rw_mbytes_per_sec": 0, 00:05:21.783 "r_mbytes_per_sec": 0, 00:05:21.783 "w_mbytes_per_sec": 0 00:05:21.783 }, 00:05:21.783 "claimed": false, 00:05:21.783 "zoned": false, 00:05:21.783 "supported_io_types": { 00:05:21.783 
"read": true, 00:05:21.783 "write": true, 00:05:21.783 "unmap": true, 00:05:21.783 "write_zeroes": true, 00:05:21.783 "flush": true, 00:05:21.783 "reset": true, 00:05:21.783 "compare": false, 00:05:21.783 "compare_and_write": false, 00:05:21.783 "abort": true, 00:05:21.783 "nvme_admin": false, 00:05:21.783 "nvme_io": false 00:05:21.783 }, 00:05:21.783 "memory_domains": [ 00:05:21.783 { 00:05:21.783 "dma_device_id": "system", 00:05:21.783 "dma_device_type": 1 00:05:21.783 }, 00:05:21.783 { 00:05:21.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:21.783 "dma_device_type": 2 00:05:21.783 } 00:05:21.783 ], 00:05:21.783 "driver_specific": {} 00:05:21.783 } 00:05:21.783 ]' 00:05:21.783 14:07:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:21.783 14:07:45 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:21.783 14:07:45 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:21.783 14:07:45 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.783 14:07:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.783 14:07:45 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:21.783 14:07:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:21.783 14:07:45 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.783 14:07:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.783 14:07:45 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:21.783 14:07:45 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:21.783 14:07:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:21.784 14:07:45 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:21.784 00:05:21.784 real 0m0.139s 00:05:21.784 user 0m0.086s 00:05:21.784 sys 0m0.018s 00:05:21.784 14:07:45 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:21.784 14:07:45 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:21.784 ************************************ 00:05:21.784 END TEST rpc_plugins 00:05:21.784 ************************************ 00:05:21.784 14:07:45 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:21.784 14:07:45 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:21.784 14:07:45 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:21.784 14:07:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.784 ************************************ 00:05:21.784 START TEST rpc_trace_cmd_test 00:05:21.784 ************************************ 00:05:21.784 14:07:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:05:21.784 14:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:21.784 14:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:21.784 14:07:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.784 14:07:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:22.044 14:07:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.044 14:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:22.044 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid287622", 00:05:22.044 "tpoint_group_mask": "0x8", 00:05:22.044 "iscsi_conn": { 00:05:22.044 "mask": "0x2", 00:05:22.044 "tpoint_mask": "0x0" 00:05:22.044 }, 00:05:22.044 "scsi": { 00:05:22.044 "mask": "0x4", 00:05:22.044 "tpoint_mask": 
"0x0" 00:05:22.044 }, 00:05:22.044 "bdev": { 00:05:22.044 "mask": "0x8", 00:05:22.044 "tpoint_mask": "0xffffffffffffffff" 00:05:22.044 }, 00:05:22.044 "nvmf_rdma": { 00:05:22.044 "mask": "0x10", 00:05:22.044 "tpoint_mask": "0x0" 00:05:22.044 }, 00:05:22.044 "nvmf_tcp": { 00:05:22.044 "mask": "0x20", 00:05:22.044 "tpoint_mask": "0x0" 00:05:22.044 }, 00:05:22.044 "ftl": { 00:05:22.044 "mask": "0x40", 00:05:22.044 "tpoint_mask": "0x0" 00:05:22.044 }, 00:05:22.044 "blobfs": { 00:05:22.044 "mask": "0x80", 00:05:22.044 "tpoint_mask": "0x0" 00:05:22.044 }, 00:05:22.044 "dsa": { 00:05:22.044 "mask": "0x200", 00:05:22.044 "tpoint_mask": "0x0" 00:05:22.044 }, 00:05:22.044 "thread": { 00:05:22.044 "mask": "0x400", 00:05:22.044 "tpoint_mask": "0x0" 00:05:22.044 }, 00:05:22.044 "nvme_pcie": { 00:05:22.044 "mask": "0x800", 00:05:22.044 "tpoint_mask": "0x0" 00:05:22.044 }, 00:05:22.044 "iaa": { 00:05:22.044 "mask": "0x1000", 00:05:22.044 "tpoint_mask": "0x0" 00:05:22.044 }, 00:05:22.044 "nvme_tcp": { 00:05:22.044 "mask": "0x2000", 00:05:22.044 "tpoint_mask": "0x0" 00:05:22.044 }, 00:05:22.044 "bdev_nvme": { 00:05:22.044 "mask": "0x4000", 00:05:22.044 "tpoint_mask": "0x0" 00:05:22.044 }, 00:05:22.044 "sock": { 00:05:22.044 "mask": "0x8000", 00:05:22.044 "tpoint_mask": "0x0" 00:05:22.044 } 00:05:22.044 }' 00:05:22.044 14:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:22.044 14:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:22.044 14:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:22.044 14:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:22.044 14:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:22.044 14:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:22.044 14:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:22.044 14:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:22.044 14:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:22.044 14:07:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:22.044 00:05:22.044 real 0m0.209s 00:05:22.044 user 0m0.170s 00:05:22.044 sys 0m0.030s 00:05:22.044 14:07:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:22.045 14:07:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:22.045 ************************************ 00:05:22.045 END TEST rpc_trace_cmd_test 00:05:22.045 ************************************ 00:05:22.045 14:07:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:22.045 14:07:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:22.045 14:07:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:22.045 14:07:45 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:22.045 14:07:45 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:22.045 14:07:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.306 ************************************ 00:05:22.306 START TEST rpc_daemon_integrity 00:05:22.306 ************************************ 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity 
-- common/autotest_common.sh@10 -- # set +x 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:22.306 { 00:05:22.306 "name": "Malloc2", 00:05:22.306 "aliases": [ 00:05:22.306 "9d77837e-91c7-4a3a-a86b-e6368ed34277" 00:05:22.306 ], 00:05:22.306 "product_name": "Malloc disk", 00:05:22.306 "block_size": 512, 00:05:22.306 "num_blocks": 16384, 00:05:22.306 "uuid": "9d77837e-91c7-4a3a-a86b-e6368ed34277", 00:05:22.306 "assigned_rate_limits": { 00:05:22.306 "rw_ios_per_sec": 0, 00:05:22.306 "rw_mbytes_per_sec": 0, 00:05:22.306 "r_mbytes_per_sec": 0, 00:05:22.306 "w_mbytes_per_sec": 0 00:05:22.306 }, 00:05:22.306 "claimed": false, 00:05:22.306 "zoned": false, 00:05:22.306 "supported_io_types": { 00:05:22.306 "read": true, 00:05:22.306 "write": true, 00:05:22.306 "unmap": true, 00:05:22.306 "write_zeroes": true, 00:05:22.306 "flush": true, 00:05:22.306 "reset": true, 00:05:22.306 "compare": false, 00:05:22.306 "compare_and_write": false, 00:05:22.306 "abort": true, 00:05:22.306 "nvme_admin": false, 00:05:22.306 "nvme_io": false 00:05:22.306 }, 00:05:22.306 "memory_domains": [ 00:05:22.306 { 00:05:22.306 "dma_device_id": "system", 00:05:22.306 "dma_device_type": 1 00:05:22.306 }, 00:05:22.306 { 00:05:22.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.306 "dma_device_type": 2 00:05:22.306 } 00:05:22.306 ], 00:05:22.306 "driver_specific": {} 00:05:22.306 } 00:05:22.306 ]' 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.306 [2024-06-07 14:07:45.851200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:22.306 [2024-06-07 14:07:45.851228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:22.306 [2024-06-07 14:07:45.851239] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd419f0 00:05:22.306 [2024-06-07 14:07:45.851246] vbdev_passthru.c: 695:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:05:22.306 [2024-06-07 14:07:45.852441] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:22.306 [2024-06-07 14:07:45.852460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:22.306 Passthru0 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:22.306 { 00:05:22.306 "name": "Malloc2", 00:05:22.306 "aliases": [ 00:05:22.306 "9d77837e-91c7-4a3a-a86b-e6368ed34277" 00:05:22.306 ], 00:05:22.306 "product_name": "Malloc disk", 00:05:22.306 "block_size": 512, 00:05:22.306 "num_blocks": 16384, 00:05:22.306 "uuid": "9d77837e-91c7-4a3a-a86b-e6368ed34277", 00:05:22.306 "assigned_rate_limits": { 00:05:22.306 "rw_ios_per_sec": 0, 00:05:22.306 "rw_mbytes_per_sec": 0, 00:05:22.306 "r_mbytes_per_sec": 0, 00:05:22.306 "w_mbytes_per_sec": 0 00:05:22.306 }, 00:05:22.306 "claimed": true, 00:05:22.306 "claim_type": "exclusive_write", 00:05:22.306 "zoned": false, 00:05:22.306 "supported_io_types": { 00:05:22.306 "read": true, 00:05:22.306 "write": true, 00:05:22.306 "unmap": true, 00:05:22.306 "write_zeroes": true, 00:05:22.306 "flush": true, 00:05:22.306 "reset": true, 00:05:22.306 "compare": false, 00:05:22.306 "compare_and_write": false, 00:05:22.306 "abort": true, 00:05:22.306 "nvme_admin": false, 00:05:22.306 "nvme_io": false 00:05:22.306 }, 00:05:22.306 "memory_domains": [ 00:05:22.306 { 00:05:22.306 "dma_device_id": "system", 00:05:22.306 "dma_device_type": 1 00:05:22.306 }, 00:05:22.306 { 00:05:22.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.306 "dma_device_type": 2 00:05:22.306 } 00:05:22.306 ], 00:05:22.306 "driver_specific": {} 00:05:22.306 }, 00:05:22.306 { 00:05:22.306 "name": "Passthru0", 00:05:22.306 "aliases": [ 00:05:22.306 "0b6fc503-7164-5afb-a63f-e3554e12398d" 00:05:22.306 ], 00:05:22.306 "product_name": "passthru", 00:05:22.306 "block_size": 512, 00:05:22.306 "num_blocks": 16384, 00:05:22.306 "uuid": "0b6fc503-7164-5afb-a63f-e3554e12398d", 00:05:22.306 "assigned_rate_limits": { 00:05:22.306 "rw_ios_per_sec": 0, 00:05:22.306 "rw_mbytes_per_sec": 0, 00:05:22.306 "r_mbytes_per_sec": 0, 00:05:22.306 "w_mbytes_per_sec": 0 00:05:22.306 }, 00:05:22.306 "claimed": false, 00:05:22.306 "zoned": false, 00:05:22.306 "supported_io_types": { 00:05:22.306 "read": true, 00:05:22.306 "write": true, 00:05:22.306 "unmap": true, 00:05:22.306 "write_zeroes": true, 00:05:22.306 "flush": true, 00:05:22.306 "reset": true, 00:05:22.306 "compare": false, 00:05:22.306 "compare_and_write": false, 00:05:22.306 "abort": true, 00:05:22.306 "nvme_admin": false, 00:05:22.306 "nvme_io": false 00:05:22.306 }, 00:05:22.306 "memory_domains": [ 00:05:22.306 { 00:05:22.306 "dma_device_id": "system", 00:05:22.306 "dma_device_type": 1 00:05:22.306 }, 00:05:22.306 { 00:05:22.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.306 "dma_device_type": 2 00:05:22.306 } 00:05:22.306 ], 00:05:22.306 "driver_specific": { 00:05:22.306 "passthru": { 00:05:22.306 "name": "Passthru0", 00:05:22.306 "base_bdev_name": "Malloc2" 
00:05:22.306 } 00:05:22.306 } 00:05:22.306 } 00:05:22.306 ]' 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:22.306 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.307 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.307 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.307 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:22.307 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.307 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.307 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.307 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:22.307 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:22.307 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.567 14:07:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:22.567 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:22.567 14:07:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:22.567 14:07:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:22.567 00:05:22.567 real 0m0.296s 00:05:22.567 user 0m0.186s 00:05:22.567 sys 0m0.043s 00:05:22.567 14:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:22.567 14:07:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.567 ************************************ 00:05:22.567 END TEST rpc_daemon_integrity 00:05:22.567 ************************************ 00:05:22.567 14:07:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:22.567 14:07:46 rpc -- rpc/rpc.sh@84 -- # killprocess 287622 00:05:22.567 14:07:46 rpc -- common/autotest_common.sh@949 -- # '[' -z 287622 ']' 00:05:22.567 14:07:46 rpc -- common/autotest_common.sh@953 -- # kill -0 287622 00:05:22.567 14:07:46 rpc -- common/autotest_common.sh@954 -- # uname 00:05:22.567 14:07:46 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:22.567 14:07:46 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 287622 00:05:22.567 14:07:46 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:22.567 14:07:46 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:22.567 14:07:46 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 287622' 00:05:22.567 killing process with pid 287622 00:05:22.567 14:07:46 rpc -- common/autotest_common.sh@968 -- # kill 287622 00:05:22.567 14:07:46 rpc -- common/autotest_common.sh@973 -- # wait 287622 00:05:22.827 00:05:22.827 real 0m2.361s 00:05:22.827 user 0m3.074s 00:05:22.827 sys 0m0.671s 00:05:22.827 14:07:46 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:22.827 14:07:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.827 ************************************ 00:05:22.827 END TEST rpc 00:05:22.827 ************************************ 00:05:22.827 14:07:46 -- spdk/autotest.sh@170 -- # run_test skip_rpc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:22.827 14:07:46 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:22.827 14:07:46 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:22.827 14:07:46 -- common/autotest_common.sh@10 -- # set +x 00:05:22.827 ************************************ 00:05:22.827 START TEST skip_rpc 00:05:22.827 ************************************ 00:05:22.827 14:07:46 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:22.827 * Looking for test storage... 00:05:22.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:22.827 14:07:46 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:22.827 14:07:46 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:22.827 14:07:46 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:22.827 14:07:46 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:22.827 14:07:46 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:22.827 14:07:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.086 ************************************ 00:05:23.086 START TEST skip_rpc 00:05:23.086 ************************************ 00:05:23.086 14:07:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:05:23.086 14:07:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=288142 00:05:23.086 14:07:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.086 14:07:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:23.086 14:07:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:23.086 [2024-06-07 14:07:46.547257] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
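(Illustrative sketch, not part of the captured output: the rpc_daemon_integrity pass shown earlier in this log drives SPDK's JSON-RPC bdev commands directly; the same lifecycle can be reproduced by hand with scripts/rpc.py, socket arguments omitted here for brevity.)
   rpc.py bdev_malloc_create 8 512                       # auto-named malloc bdev; Malloc2 in this run (16384 x 512-byte blocks)
   rpc.py bdev_passthru_create -b Malloc2 -p Passthru0   # passthru vbdev claims Malloc2 (claim_type exclusive_write)
   rpc.py bdev_get_bdevs                                 # now reports both Malloc2 ("claimed": true) and Passthru0
   rpc.py bdev_passthru_delete Passthru0                 # release the claim, then remove the base bdev
   rpc.py bdev_malloc_delete Malloc2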
00:05:23.086 [2024-06-07 14:07:46.547307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid288142 ] 00:05:23.086 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.086 [2024-06-07 14:07:46.612775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.086 [2024-06-07 14:07:46.647178] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 288142 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 288142 ']' 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 288142 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 288142 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 288142' 00:05:28.368 killing process with pid 288142 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 288142 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 288142 00:05:28.368 00:05:28.368 real 0m5.253s 00:05:28.368 user 0m5.049s 00:05:28.368 sys 0m0.237s 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:28.368 14:07:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.368 ************************************ 00:05:28.368 END TEST skip_rpc 
00:05:28.368 ************************************ 00:05:28.368 14:07:51 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:28.368 14:07:51 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:28.368 14:07:51 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:28.368 14:07:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.368 ************************************ 00:05:28.368 START TEST skip_rpc_with_json 00:05:28.368 ************************************ 00:05:28.368 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:05:28.368 14:07:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:28.368 14:07:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=289195 00:05:28.368 14:07:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.368 14:07:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 289195 00:05:28.368 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 289195 ']' 00:05:28.368 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.368 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:28.368 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.368 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:28.368 14:07:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.368 14:07:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.368 [2024-06-07 14:07:51.876188] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:05:28.368 [2024-06-07 14:07:51.876246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid289195 ] 00:05:28.368 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.368 [2024-06-07 14:07:51.942932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.368 [2024-06-07 14:07:51.979070] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.310 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:29.310 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:05:29.310 14:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:29.310 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:29.310 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.310 [2024-06-07 14:07:52.624616] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:29.310 request: 00:05:29.310 { 00:05:29.310 "trtype": "tcp", 00:05:29.310 "method": "nvmf_get_transports", 00:05:29.310 "req_id": 1 00:05:29.310 } 00:05:29.310 Got JSON-RPC error response 00:05:29.310 response: 00:05:29.310 { 00:05:29.310 "code": -19, 00:05:29.310 "message": "No such device" 00:05:29.310 } 00:05:29.310 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:29.310 14:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:29.310 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:29.310 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.310 [2024-06-07 14:07:52.632726] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.310 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:29.310 14:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:29.310 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:29.310 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.310 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:29.310 14:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:29.310 { 00:05:29.310 "subsystems": [ 00:05:29.310 { 00:05:29.310 "subsystem": "vfio_user_target", 00:05:29.310 "config": null 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "subsystem": "keyring", 00:05:29.310 "config": [] 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "subsystem": "iobuf", 00:05:29.310 "config": [ 00:05:29.310 { 00:05:29.310 "method": "iobuf_set_options", 00:05:29.310 "params": { 00:05:29.310 "small_pool_count": 8192, 00:05:29.310 "large_pool_count": 1024, 00:05:29.310 "small_bufsize": 8192, 00:05:29.310 "large_bufsize": 135168 00:05:29.310 } 00:05:29.310 } 00:05:29.310 ] 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "subsystem": "sock", 00:05:29.310 "config": [ 00:05:29.310 { 00:05:29.310 "method": "sock_set_default_impl", 00:05:29.310 "params": { 00:05:29.310 "impl_name": "posix" 00:05:29.310 } 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "method": 
"sock_impl_set_options", 00:05:29.310 "params": { 00:05:29.310 "impl_name": "ssl", 00:05:29.310 "recv_buf_size": 4096, 00:05:29.310 "send_buf_size": 4096, 00:05:29.310 "enable_recv_pipe": true, 00:05:29.310 "enable_quickack": false, 00:05:29.310 "enable_placement_id": 0, 00:05:29.310 "enable_zerocopy_send_server": true, 00:05:29.310 "enable_zerocopy_send_client": false, 00:05:29.310 "zerocopy_threshold": 0, 00:05:29.310 "tls_version": 0, 00:05:29.310 "enable_ktls": false 00:05:29.310 } 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "method": "sock_impl_set_options", 00:05:29.310 "params": { 00:05:29.310 "impl_name": "posix", 00:05:29.310 "recv_buf_size": 2097152, 00:05:29.310 "send_buf_size": 2097152, 00:05:29.310 "enable_recv_pipe": true, 00:05:29.310 "enable_quickack": false, 00:05:29.310 "enable_placement_id": 0, 00:05:29.310 "enable_zerocopy_send_server": true, 00:05:29.310 "enable_zerocopy_send_client": false, 00:05:29.310 "zerocopy_threshold": 0, 00:05:29.310 "tls_version": 0, 00:05:29.310 "enable_ktls": false 00:05:29.310 } 00:05:29.310 } 00:05:29.310 ] 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "subsystem": "vmd", 00:05:29.310 "config": [] 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "subsystem": "accel", 00:05:29.310 "config": [ 00:05:29.310 { 00:05:29.310 "method": "accel_set_options", 00:05:29.310 "params": { 00:05:29.310 "small_cache_size": 128, 00:05:29.310 "large_cache_size": 16, 00:05:29.310 "task_count": 2048, 00:05:29.310 "sequence_count": 2048, 00:05:29.310 "buf_count": 2048 00:05:29.310 } 00:05:29.310 } 00:05:29.310 ] 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "subsystem": "bdev", 00:05:29.310 "config": [ 00:05:29.310 { 00:05:29.310 "method": "bdev_set_options", 00:05:29.310 "params": { 00:05:29.310 "bdev_io_pool_size": 65535, 00:05:29.310 "bdev_io_cache_size": 256, 00:05:29.310 "bdev_auto_examine": true, 00:05:29.310 "iobuf_small_cache_size": 128, 00:05:29.310 "iobuf_large_cache_size": 16 00:05:29.310 } 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "method": "bdev_raid_set_options", 00:05:29.310 "params": { 00:05:29.310 "process_window_size_kb": 1024 00:05:29.310 } 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "method": "bdev_iscsi_set_options", 00:05:29.310 "params": { 00:05:29.310 "timeout_sec": 30 00:05:29.310 } 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "method": "bdev_nvme_set_options", 00:05:29.310 "params": { 00:05:29.310 "action_on_timeout": "none", 00:05:29.310 "timeout_us": 0, 00:05:29.310 "timeout_admin_us": 0, 00:05:29.310 "keep_alive_timeout_ms": 10000, 00:05:29.310 "arbitration_burst": 0, 00:05:29.310 "low_priority_weight": 0, 00:05:29.310 "medium_priority_weight": 0, 00:05:29.310 "high_priority_weight": 0, 00:05:29.310 "nvme_adminq_poll_period_us": 10000, 00:05:29.310 "nvme_ioq_poll_period_us": 0, 00:05:29.310 "io_queue_requests": 0, 00:05:29.310 "delay_cmd_submit": true, 00:05:29.310 "transport_retry_count": 4, 00:05:29.310 "bdev_retry_count": 3, 00:05:29.310 "transport_ack_timeout": 0, 00:05:29.310 "ctrlr_loss_timeout_sec": 0, 00:05:29.310 "reconnect_delay_sec": 0, 00:05:29.310 "fast_io_fail_timeout_sec": 0, 00:05:29.310 "disable_auto_failback": false, 00:05:29.310 "generate_uuids": false, 00:05:29.310 "transport_tos": 0, 00:05:29.310 "nvme_error_stat": false, 00:05:29.310 "rdma_srq_size": 0, 00:05:29.310 "io_path_stat": false, 00:05:29.310 "allow_accel_sequence": false, 00:05:29.310 "rdma_max_cq_size": 0, 00:05:29.310 "rdma_cm_event_timeout_ms": 0, 00:05:29.310 "dhchap_digests": [ 00:05:29.310 "sha256", 00:05:29.310 "sha384", 00:05:29.310 "sha512" 
00:05:29.310 ], 00:05:29.310 "dhchap_dhgroups": [ 00:05:29.310 "null", 00:05:29.310 "ffdhe2048", 00:05:29.310 "ffdhe3072", 00:05:29.310 "ffdhe4096", 00:05:29.310 "ffdhe6144", 00:05:29.310 "ffdhe8192" 00:05:29.310 ] 00:05:29.310 } 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "method": "bdev_nvme_set_hotplug", 00:05:29.310 "params": { 00:05:29.310 "period_us": 100000, 00:05:29.310 "enable": false 00:05:29.310 } 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "method": "bdev_wait_for_examine" 00:05:29.310 } 00:05:29.310 ] 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "subsystem": "scsi", 00:05:29.310 "config": null 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "subsystem": "scheduler", 00:05:29.310 "config": [ 00:05:29.310 { 00:05:29.310 "method": "framework_set_scheduler", 00:05:29.310 "params": { 00:05:29.310 "name": "static" 00:05:29.310 } 00:05:29.310 } 00:05:29.310 ] 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "subsystem": "vhost_scsi", 00:05:29.310 "config": [] 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "subsystem": "vhost_blk", 00:05:29.310 "config": [] 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "subsystem": "ublk", 00:05:29.310 "config": [] 00:05:29.310 }, 00:05:29.310 { 00:05:29.310 "subsystem": "nbd", 00:05:29.310 "config": [] 00:05:29.310 }, 00:05:29.310 { 00:05:29.311 "subsystem": "nvmf", 00:05:29.311 "config": [ 00:05:29.311 { 00:05:29.311 "method": "nvmf_set_config", 00:05:29.311 "params": { 00:05:29.311 "discovery_filter": "match_any", 00:05:29.311 "admin_cmd_passthru": { 00:05:29.311 "identify_ctrlr": false 00:05:29.311 } 00:05:29.311 } 00:05:29.311 }, 00:05:29.311 { 00:05:29.311 "method": "nvmf_set_max_subsystems", 00:05:29.311 "params": { 00:05:29.311 "max_subsystems": 1024 00:05:29.311 } 00:05:29.311 }, 00:05:29.311 { 00:05:29.311 "method": "nvmf_set_crdt", 00:05:29.311 "params": { 00:05:29.311 "crdt1": 0, 00:05:29.311 "crdt2": 0, 00:05:29.311 "crdt3": 0 00:05:29.311 } 00:05:29.311 }, 00:05:29.311 { 00:05:29.311 "method": "nvmf_create_transport", 00:05:29.311 "params": { 00:05:29.311 "trtype": "TCP", 00:05:29.311 "max_queue_depth": 128, 00:05:29.311 "max_io_qpairs_per_ctrlr": 127, 00:05:29.311 "in_capsule_data_size": 4096, 00:05:29.311 "max_io_size": 131072, 00:05:29.311 "io_unit_size": 131072, 00:05:29.311 "max_aq_depth": 128, 00:05:29.311 "num_shared_buffers": 511, 00:05:29.311 "buf_cache_size": 4294967295, 00:05:29.311 "dif_insert_or_strip": false, 00:05:29.311 "zcopy": false, 00:05:29.311 "c2h_success": true, 00:05:29.311 "sock_priority": 0, 00:05:29.311 "abort_timeout_sec": 1, 00:05:29.311 "ack_timeout": 0, 00:05:29.311 "data_wr_pool_size": 0 00:05:29.311 } 00:05:29.311 } 00:05:29.311 ] 00:05:29.311 }, 00:05:29.311 { 00:05:29.311 "subsystem": "iscsi", 00:05:29.311 "config": [ 00:05:29.311 { 00:05:29.311 "method": "iscsi_set_options", 00:05:29.311 "params": { 00:05:29.311 "node_base": "iqn.2016-06.io.spdk", 00:05:29.311 "max_sessions": 128, 00:05:29.311 "max_connections_per_session": 2, 00:05:29.311 "max_queue_depth": 64, 00:05:29.311 "default_time2wait": 2, 00:05:29.311 "default_time2retain": 20, 00:05:29.311 "first_burst_length": 8192, 00:05:29.311 "immediate_data": true, 00:05:29.311 "allow_duplicated_isid": false, 00:05:29.311 "error_recovery_level": 0, 00:05:29.311 "nop_timeout": 60, 00:05:29.311 "nop_in_interval": 30, 00:05:29.311 "disable_chap": false, 00:05:29.311 "require_chap": false, 00:05:29.311 "mutual_chap": false, 00:05:29.311 "chap_group": 0, 00:05:29.311 "max_large_datain_per_connection": 64, 00:05:29.311 "max_r2t_per_connection": 4, 00:05:29.311 
"pdu_pool_size": 36864, 00:05:29.311 "immediate_data_pool_size": 16384, 00:05:29.311 "data_out_pool_size": 2048 00:05:29.311 } 00:05:29.311 } 00:05:29.311 ] 00:05:29.311 } 00:05:29.311 ] 00:05:29.311 } 00:05:29.311 14:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:29.311 14:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 289195 00:05:29.311 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 289195 ']' 00:05:29.311 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 289195 00:05:29.311 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:29.311 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:29.311 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 289195 00:05:29.311 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:29.311 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:29.311 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 289195' 00:05:29.311 killing process with pid 289195 00:05:29.311 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 289195 00:05:29.311 14:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 289195 00:05:29.571 14:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=289516 00:05:29.571 14:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:29.571 14:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 289516 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 289516 ']' 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 289516 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 289516 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 289516' 00:05:34.854 killing process with pid 289516 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 289516 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 289516 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:34.854 00:05:34.854 real 0m6.470s 
00:05:34.854 user 0m6.323s 00:05:34.854 sys 0m0.523s 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.854 ************************************ 00:05:34.854 END TEST skip_rpc_with_json 00:05:34.854 ************************************ 00:05:34.854 14:07:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:34.854 14:07:58 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:34.854 14:07:58 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:34.854 14:07:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.854 ************************************ 00:05:34.854 START TEST skip_rpc_with_delay 00:05:34.854 ************************************ 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:34.854 [2024-06-07 14:07:58.428021] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
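(Illustrative sketch, not part of the captured output: the skip_rpc_with_delay failure above is the expected behaviour; with the RPC server disabled the target refuses --wait-for-rpc, and the test asserts exactly that. Workspace path as in this log.)
   /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
   # exits non-zero with: "Cannot use '--wait-for-rpc' if no RPC server is going to be started."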
00:05:34.854 [2024-06-07 14:07:58.428093] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:34.854 00:05:34.854 real 0m0.081s 00:05:34.854 user 0m0.055s 00:05:34.854 sys 0m0.026s 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:34.854 14:07:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:34.854 ************************************ 00:05:34.854 END TEST skip_rpc_with_delay 00:05:34.854 ************************************ 00:05:34.854 14:07:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:34.854 14:07:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:34.854 14:07:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:34.854 14:07:58 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:34.854 14:07:58 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:34.854 14:07:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.115 ************************************ 00:05:35.115 START TEST exit_on_failed_rpc_init 00:05:35.115 ************************************ 00:05:35.115 14:07:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:05:35.115 14:07:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=290583 00:05:35.115 14:07:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 290583 00:05:35.115 14:07:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 290583 ']' 00:05:35.115 14:07:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.115 14:07:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:35.115 14:07:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.115 14:07:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:35.115 14:07:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:35.115 14:07:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.115 [2024-06-07 14:07:58.575796] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:05:35.115 [2024-06-07 14:07:58.575846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290583 ] 00:05:35.115 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.115 [2024-06-07 14:07:58.642969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.115 [2024-06-07 14:07:58.680134] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.685 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:35.685 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:05:35.685 14:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.685 14:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:35.685 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:05:35.685 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:35.685 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.685 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:35.685 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.685 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:35.685 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.685 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:35.685 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.685 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:35.685 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:35.945 [2024-06-07 14:07:59.377041] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:05:35.945 [2024-06-07 14:07:59.377092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid290920 ] 00:05:35.945 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.945 [2024-06-07 14:07:59.457111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.945 [2024-06-07 14:07:59.488374] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.945 [2024-06-07 14:07:59.488430] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
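(Illustrative sketch, not part of the captured output: the "socket in use" error above comes from starting a second spdk_tgt against the default RPC socket while the first instance still holds it; outside this negative test a second instance would be given its own socket with -r. spdk_tgt stands for the full build/bin path above, and the second socket path below is a made-up example.)
   spdk_tgt -m 0x1 &                              # first instance owns /var/tmp/spdk.sock
   spdk_tgt -m 0x2 &                              # fails: RPC Unix domain socket path /var/tmp/spdk.sock in use
   spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &       # works: separate RPC socket per instance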
00:05:35.945 [2024-06-07 14:07:59.488440] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:35.945 [2024-06-07 14:07:59.488447] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 290583 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 290583 ']' 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 290583 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 290583 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 290583' 00:05:35.945 killing process with pid 290583 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 290583 00:05:35.945 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 290583 00:05:36.205 00:05:36.205 real 0m1.263s 00:05:36.205 user 0m1.422s 00:05:36.205 sys 0m0.380s 00:05:36.205 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:36.205 14:07:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.205 ************************************ 00:05:36.205 END TEST exit_on_failed_rpc_init 00:05:36.205 ************************************ 00:05:36.205 14:07:59 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:36.205 00:05:36.205 real 0m13.448s 00:05:36.205 user 0m12.990s 00:05:36.205 sys 0m1.426s 00:05:36.205 14:07:59 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:36.205 14:07:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.205 ************************************ 00:05:36.205 END TEST skip_rpc 00:05:36.205 ************************************ 00:05:36.466 14:07:59 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:36.466 14:07:59 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:36.466 14:07:59 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:36.466 14:07:59 -- 
common/autotest_common.sh@10 -- # set +x 00:05:36.466 ************************************ 00:05:36.466 START TEST rpc_client 00:05:36.466 ************************************ 00:05:36.466 14:07:59 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:36.466 * Looking for test storage... 00:05:36.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:36.466 14:07:59 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:36.466 OK 00:05:36.466 14:08:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:36.466 00:05:36.466 real 0m0.129s 00:05:36.466 user 0m0.067s 00:05:36.466 sys 0m0.071s 00:05:36.466 14:08:00 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:36.466 14:08:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:36.466 ************************************ 00:05:36.466 END TEST rpc_client 00:05:36.466 ************************************ 00:05:36.466 14:08:00 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:36.466 14:08:00 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:36.466 14:08:00 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:36.466 14:08:00 -- common/autotest_common.sh@10 -- # set +x 00:05:36.466 ************************************ 00:05:36.466 START TEST json_config 00:05:36.466 ************************************ 00:05:36.466 14:08:00 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:36.728 14:08:00 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.728 14:08:00 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.728 14:08:00 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.728 14:08:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.728 14:08:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.728 14:08:00 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.728 14:08:00 json_config -- paths/export.sh@5 -- # export PATH 00:05:36.728 14:08:00 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@47 -- # : 0 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:36.728 14:08:00 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:36.728 INFO: JSON configuration test init 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:36.728 14:08:00 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:36.728 14:08:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:36.728 14:08:00 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:36.728 14:08:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.728 14:08:00 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:36.728 14:08:00 json_config -- json_config/common.sh@9 -- # local app=target 00:05:36.728 14:08:00 json_config -- json_config/common.sh@10 -- # shift 00:05:36.728 14:08:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:36.728 14:08:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:36.728 14:08:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:36.728 14:08:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.728 14:08:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:36.728 14:08:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=291041 00:05:36.728 14:08:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:36.728 Waiting for target to run... 
00:05:36.728 14:08:00 json_config -- json_config/common.sh@25 -- # waitforlisten 291041 /var/tmp/spdk_tgt.sock 00:05:36.728 14:08:00 json_config -- common/autotest_common.sh@830 -- # '[' -z 291041 ']' 00:05:36.728 14:08:00 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:36.728 14:08:00 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:36.729 14:08:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:36.729 14:08:00 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:36.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:36.729 14:08:00 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:36.729 14:08:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.729 [2024-06-07 14:08:00.263587] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:05:36.729 [2024-06-07 14:08:00.263643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291041 ] 00:05:36.729 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.989 [2024-06-07 14:08:00.585486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.989 [2024-06-07 14:08:00.612944] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.559 14:08:01 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:37.559 14:08:01 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:37.559 14:08:01 json_config -- json_config/common.sh@26 -- # echo '' 00:05:37.559 00:05:37.559 14:08:01 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:37.559 14:08:01 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:37.559 14:08:01 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:37.559 14:08:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.559 14:08:01 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:37.559 14:08:01 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:37.559 14:08:01 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:37.559 14:08:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.559 14:08:01 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:37.559 14:08:01 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:37.559 14:08:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:38.129 14:08:01 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:38.129 14:08:01 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:38.130 14:08:01 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:38.130 14:08:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.130 14:08:01 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:38.130 14:08:01 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:38.130 14:08:01 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:38.130 14:08:01 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:38.130 14:08:01 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:38.130 14:08:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:38.130 14:08:01 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:38.130 14:08:01 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:38.130 14:08:01 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:38.130 14:08:01 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:38.130 14:08:01 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:38.130 14:08:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.390 14:08:01 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:38.390 14:08:01 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:38.390 14:08:01 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:38.390 14:08:01 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:38.390 14:08:01 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:38.390 14:08:01 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:38.390 14:08:01 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:38.390 14:08:01 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:38.390 14:08:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.390 14:08:01 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:38.390 14:08:01 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:38.390 14:08:01 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:38.390 14:08:01 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:38.390 14:08:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:38.390 MallocForNvmf0 00:05:38.390 14:08:01 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:38.390 14:08:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:38.651 MallocForNvmf1 00:05:38.651 14:08:02 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:38.651 14:08:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:38.651 [2024-06-07 14:08:02.283899] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.913 14:08:02 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:38.913 14:08:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:38.913 14:08:02 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:38.913 14:08:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:39.199 14:08:02 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:39.199 14:08:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:39.199 14:08:02 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:39.199 14:08:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:39.459 [2024-06-07 14:08:02.946158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:39.459 14:08:02 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:39.459 14:08:02 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:39.459 14:08:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.459 14:08:03 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:39.459 14:08:03 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:39.459 14:08:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.459 14:08:03 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:39.459 14:08:03 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:39.459 14:08:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:39.720 MallocBdevForConfigChangeCheck 00:05:39.720 14:08:03 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:39.720 14:08:03 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:39.720 14:08:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.720 14:08:03 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:39.720 14:08:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.979 14:08:03 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:39.980 INFO: shutting down applications... 
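(Illustrative sketch, not part of the captured output: the nvmf target state built up above corresponds to this scripts/rpc.py sequence; the bdev names, NQN, transport parameters and listener address are the ones this test uses, and the RPC socket is the /var/tmp/spdk_tgt.sock shown in the trace.)
   rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
   rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
   rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
   rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
   rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
   rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
   rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420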
00:05:39.980 14:08:03 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:39.980 14:08:03 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:39.980 14:08:03 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:39.980 14:08:03 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:40.593 Calling clear_iscsi_subsystem 00:05:40.593 Calling clear_nvmf_subsystem 00:05:40.593 Calling clear_nbd_subsystem 00:05:40.593 Calling clear_ublk_subsystem 00:05:40.593 Calling clear_vhost_blk_subsystem 00:05:40.593 Calling clear_vhost_scsi_subsystem 00:05:40.593 Calling clear_bdev_subsystem 00:05:40.593 14:08:03 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:40.593 14:08:03 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:40.593 14:08:03 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:40.593 14:08:03 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.593 14:08:03 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:40.593 14:08:03 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:40.853 14:08:04 json_config -- json_config/json_config.sh@345 -- # break 00:05:40.853 14:08:04 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:40.853 14:08:04 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:40.853 14:08:04 json_config -- json_config/common.sh@31 -- # local app=target 00:05:40.853 14:08:04 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:40.853 14:08:04 json_config -- json_config/common.sh@35 -- # [[ -n 291041 ]] 00:05:40.853 14:08:04 json_config -- json_config/common.sh@38 -- # kill -SIGINT 291041 00:05:40.853 14:08:04 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:40.853 14:08:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.853 14:08:04 json_config -- json_config/common.sh@41 -- # kill -0 291041 00:05:40.853 14:08:04 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:41.425 14:08:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:41.425 14:08:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.425 14:08:04 json_config -- json_config/common.sh@41 -- # kill -0 291041 00:05:41.425 14:08:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:41.425 14:08:04 json_config -- json_config/common.sh@43 -- # break 00:05:41.425 14:08:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:41.425 14:08:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:41.425 SPDK target shutdown done 00:05:41.425 14:08:04 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:41.425 INFO: relaunching applications... 
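The shutdown sequence above comes from json_config/common.sh: SIGINT is sent to the recorded PID and the helper then polls with kill -0 for up to thirty half-second intervals. A condensed sketch of that loop (not the verbatim helper):

# Sketch: send SIGINT, then poll until the target PID disappears.
json_config_test_shutdown_app() {
    local app=$1
    local pid=${app_pid[$app]}
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            app_pid[$app]=''
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    return 1    # target failed to exit within ~15 seconds
}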
00:05:41.425 14:08:04 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.425 14:08:04 json_config -- json_config/common.sh@9 -- # local app=target 00:05:41.425 14:08:04 json_config -- json_config/common.sh@10 -- # shift 00:05:41.425 14:08:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.425 14:08:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.425 14:08:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.425 14:08:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.425 14:08:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.425 14:08:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=292166 00:05:41.425 14:08:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.425 Waiting for target to run... 00:05:41.425 14:08:04 json_config -- json_config/common.sh@25 -- # waitforlisten 292166 /var/tmp/spdk_tgt.sock 00:05:41.425 14:08:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.425 14:08:04 json_config -- common/autotest_common.sh@830 -- # '[' -z 292166 ']' 00:05:41.425 14:08:04 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.425 14:08:04 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:41.425 14:08:04 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.425 14:08:04 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:41.425 14:08:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.425 [2024-06-07 14:08:04.832262] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:05:41.425 [2024-06-07 14:08:04.832319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292166 ] 00:05:41.425 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.687 [2024-06-07 14:08:05.109557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.687 [2024-06-07 14:08:05.129005] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.257 [2024-06-07 14:08:05.603244] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.257 [2024-06-07 14:08:05.635700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:42.257 14:08:05 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:42.257 14:08:05 json_config -- common/autotest_common.sh@863 -- # return 0 00:05:42.257 14:08:05 json_config -- json_config/common.sh@26 -- # echo '' 00:05:42.257 00:05:42.257 14:08:05 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:42.257 14:08:05 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:42.257 INFO: Checking if target configuration is the same... 
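The relaunch step simply starts spdk_tgt again from the configuration saved a moment earlier and waits for its RPC socket; a trimmed sketch of the command line the trace shows (workspace prefix abbreviated):

# Relaunch the target from the previously saved JSON configuration.
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json spdk_tgt_config.json &
app_pid[target]=$!
waitforlisten "${app_pid[target]}" /var/tmp/spdk_tgt.sock   # returns once the socket accepts RPCs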
00:05:42.257 14:08:05 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.257 14:08:05 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:42.257 14:08:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.257 + '[' 2 -ne 2 ']' 00:05:42.257 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:42.257 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:42.257 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:42.257 +++ basename /dev/fd/62 00:05:42.257 ++ mktemp /tmp/62.XXX 00:05:42.257 + tmp_file_1=/tmp/62.nwG 00:05:42.257 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.258 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:42.258 + tmp_file_2=/tmp/spdk_tgt_config.json.23v 00:05:42.258 + ret=0 00:05:42.258 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.519 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.519 + diff -u /tmp/62.nwG /tmp/spdk_tgt_config.json.23v 00:05:42.519 + echo 'INFO: JSON config files are the same' 00:05:42.519 INFO: JSON config files are the same 00:05:42.519 + rm /tmp/62.nwG /tmp/spdk_tgt_config.json.23v 00:05:42.519 + exit 0 00:05:42.519 14:08:06 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:42.519 14:08:06 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:42.519 INFO: changing configuration and checking if this can be detected... 00:05:42.519 14:08:06 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:42.519 14:08:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:42.780 14:08:06 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:42.780 14:08:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.780 14:08:06 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.780 + '[' 2 -ne 2 ']' 00:05:42.780 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:42.780 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:42.780 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:42.780 +++ basename /dev/fd/62 00:05:42.780 ++ mktemp /tmp/62.XXX 00:05:42.780 + tmp_file_1=/tmp/62.pV4 00:05:42.780 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.780 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:42.780 + tmp_file_2=/tmp/spdk_tgt_config.json.yzq 00:05:42.780 + ret=0 00:05:42.780 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.042 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:43.042 + diff -u /tmp/62.pV4 /tmp/spdk_tgt_config.json.yzq 00:05:43.042 + ret=1 00:05:43.042 + echo '=== Start of file: /tmp/62.pV4 ===' 00:05:43.042 + cat /tmp/62.pV4 00:05:43.042 + echo '=== End of file: /tmp/62.pV4 ===' 00:05:43.042 + echo '' 00:05:43.042 + echo '=== Start of file: /tmp/spdk_tgt_config.json.yzq ===' 00:05:43.042 + cat /tmp/spdk_tgt_config.json.yzq 00:05:43.042 + echo '=== End of file: /tmp/spdk_tgt_config.json.yzq ===' 00:05:43.042 + echo '' 00:05:43.042 + rm /tmp/62.pV4 /tmp/spdk_tgt_config.json.yzq 00:05:43.042 + exit 1 00:05:43.042 14:08:06 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:43.042 INFO: configuration change detected. 00:05:43.042 14:08:06 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:43.042 14:08:06 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:43.042 14:08:06 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:43.042 14:08:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.042 14:08:06 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:43.042 14:08:06 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:43.042 14:08:06 json_config -- json_config/json_config.sh@317 -- # [[ -n 292166 ]] 00:05:43.042 14:08:06 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:43.042 14:08:06 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:43.042 14:08:06 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:43.042 14:08:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.042 14:08:06 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:43.042 14:08:06 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:43.042 14:08:06 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:43.042 14:08:06 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:43.042 14:08:06 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:43.042 14:08:06 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:43.042 14:08:06 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:43.042 14:08:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.042 14:08:06 json_config -- json_config/json_config.sh@323 -- # killprocess 292166 00:05:43.042 14:08:06 json_config -- common/autotest_common.sh@949 -- # '[' -z 292166 ']' 00:05:43.042 14:08:06 json_config -- common/autotest_common.sh@953 -- # kill -0 292166 00:05:43.042 14:08:06 json_config -- common/autotest_common.sh@954 -- # uname 00:05:43.042 14:08:06 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:43.042 14:08:06 
json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 292166 00:05:43.042 14:08:06 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:43.042 14:08:06 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:43.042 14:08:06 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 292166' 00:05:43.042 killing process with pid 292166 00:05:43.042 14:08:06 json_config -- common/autotest_common.sh@968 -- # kill 292166 00:05:43.042 14:08:06 json_config -- common/autotest_common.sh@973 -- # wait 292166 00:05:43.304 14:08:06 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.566 14:08:06 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:43.566 14:08:06 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:43.566 14:08:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.566 14:08:06 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:43.566 14:08:06 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:43.566 INFO: Success 00:05:43.566 00:05:43.566 real 0m6.898s 00:05:43.566 user 0m8.381s 00:05:43.566 sys 0m1.739s 00:05:43.566 14:08:06 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:43.566 14:08:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.567 ************************************ 00:05:43.567 END TEST json_config 00:05:43.567 ************************************ 00:05:43.567 14:08:07 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:43.567 14:08:07 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:43.567 14:08:07 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:43.567 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:05:43.567 ************************************ 00:05:43.567 START TEST json_config_extra_key 00:05:43.567 ************************************ 00:05:43.567 14:08:07 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:43.567 14:08:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.567 14:08:07 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.567 14:08:07 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.567 14:08:07 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.567 14:08:07 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.567 14:08:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.567 14:08:07 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.567 14:08:07 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.567 14:08:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:43.567 14:08:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.567 14:08:07 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:43.567 14:08:07 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:43.567 14:08:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:43.567 14:08:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:43.567 14:08:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:43.567 14:08:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:43.567 14:08:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:43.567 14:08:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:43.567 14:08:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:43.567 14:08:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:43.567 14:08:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:43.567 14:08:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.567 14:08:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:43.567 INFO: launching applications... 00:05:43.567 14:08:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.567 14:08:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:43.567 14:08:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:43.567 14:08:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:43.567 14:08:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:43.567 14:08:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:43.567 14:08:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.567 14:08:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:43.567 14:08:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=292711 00:05:43.567 14:08:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:43.567 Waiting for target to run... 
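The common.sh fragment traced above keeps one entry per application in a set of bash associative arrays; a condensed sketch of that bookkeeping, with the values used by this run:

declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")
# Helpers such as json_config_test_start_app and waitforlisten then index by app name:
echo "target: pid=${app_pid[target]} socket=${app_socket[target]}"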
00:05:43.567 14:08:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 292711 /var/tmp/spdk_tgt.sock 00:05:43.567 14:08:07 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 292711 ']' 00:05:43.567 14:08:07 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.567 14:08:07 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:43.567 14:08:07 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:43.567 14:08:07 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.567 14:08:07 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:43.567 14:08:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:43.828 [2024-06-07 14:08:07.229752] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:05:43.828 [2024-06-07 14:08:07.229827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292711 ] 00:05:43.828 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.089 [2024-06-07 14:08:07.524436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.089 [2024-06-07 14:08:07.542020] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.350 14:08:07 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:44.350 14:08:07 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:05:44.350 14:08:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:44.350 00:05:44.350 14:08:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:44.350 INFO: shutting down applications... 
00:05:44.350 14:08:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:44.350 14:08:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:44.350 14:08:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:44.350 14:08:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 292711 ]] 00:05:44.350 14:08:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 292711 00:05:44.350 14:08:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:44.350 14:08:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.350 14:08:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 292711 00:05:44.350 14:08:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:44.921 14:08:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:44.921 14:08:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.921 14:08:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 292711 00:05:44.921 14:08:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:44.921 14:08:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:44.921 14:08:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:44.921 14:08:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:44.921 SPDK target shutdown done 00:05:44.921 14:08:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:44.921 Success 00:05:44.921 00:05:44.921 real 0m1.432s 00:05:44.921 user 0m1.035s 00:05:44.921 sys 0m0.385s 00:05:44.921 14:08:08 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:44.921 14:08:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:44.921 ************************************ 00:05:44.921 END TEST json_config_extra_key 00:05:44.921 ************************************ 00:05:44.921 14:08:08 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:44.921 14:08:08 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:44.921 14:08:08 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:44.921 14:08:08 -- common/autotest_common.sh@10 -- # set +x 00:05:45.183 ************************************ 00:05:45.183 START TEST alias_rpc 00:05:45.183 ************************************ 00:05:45.183 14:08:08 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.183 * Looking for test storage... 
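Each suite above is driven through the autotest run_test wrapper, which prints the START TEST/END TEST banners and the real/user/sys timing seen in the trace. Roughly, and only as a simplified sketch rather than the actual autotest_common.sh code:

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"            # e.g. test/json_config/alias_rpc/alias_rpc.sh
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}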
00:05:45.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:45.183 14:08:08 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:45.183 14:08:08 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=293018 00:05:45.183 14:08:08 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 293018 00:05:45.183 14:08:08 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.183 14:08:08 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 293018 ']' 00:05:45.183 14:08:08 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.183 14:08:08 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:45.183 14:08:08 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.183 14:08:08 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:45.183 14:08:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.183 [2024-06-07 14:08:08.724721] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:05:45.183 [2024-06-07 14:08:08.724774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293018 ] 00:05:45.183 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.183 [2024-06-07 14:08:08.790334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.183 [2024-06-07 14:08:08.821957] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.124 14:08:09 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:46.124 14:08:09 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:46.124 14:08:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:46.124 14:08:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 293018 00:05:46.124 14:08:09 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 293018 ']' 00:05:46.124 14:08:09 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 293018 00:05:46.124 14:08:09 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:05:46.124 14:08:09 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:46.124 14:08:09 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 293018 00:05:46.124 14:08:09 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:46.124 14:08:09 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:46.124 14:08:09 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 293018' 00:05:46.124 killing process with pid 293018 00:05:46.124 14:08:09 alias_rpc -- common/autotest_common.sh@968 -- # kill 293018 00:05:46.124 14:08:09 alias_rpc -- common/autotest_common.sh@973 -- # wait 293018 00:05:46.385 00:05:46.385 real 0m1.355s 00:05:46.385 user 0m1.519s 00:05:46.385 sys 0m0.343s 00:05:46.385 14:08:09 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:46.385 14:08:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.385 ************************************ 
00:05:46.385 END TEST alias_rpc 00:05:46.385 ************************************ 00:05:46.385 14:08:09 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:46.385 14:08:09 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.385 14:08:09 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:46.385 14:08:09 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:46.385 14:08:09 -- common/autotest_common.sh@10 -- # set +x 00:05:46.385 ************************************ 00:05:46.385 START TEST spdkcli_tcp 00:05:46.385 ************************************ 00:05:46.385 14:08:09 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.645 * Looking for test storage... 00:05:46.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:46.645 14:08:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:46.646 14:08:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:46.646 14:08:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:46.646 14:08:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:46.646 14:08:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:46.646 14:08:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:46.646 14:08:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:46.646 14:08:10 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:46.646 14:08:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.646 14:08:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=293399 00:05:46.646 14:08:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 293399 00:05:46.646 14:08:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:46.646 14:08:10 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 293399 ']' 00:05:46.646 14:08:10 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.646 14:08:10 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:46.646 14:08:10 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.646 14:08:10 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:46.646 14:08:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.646 [2024-06-07 14:08:10.169090] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:05:46.646 [2024-06-07 14:08:10.169166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293399 ] 00:05:46.646 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.646 [2024-06-07 14:08:10.239882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.646 [2024-06-07 14:08:10.280594] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.646 [2024-06-07 14:08:10.280596] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.590 14:08:10 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:47.590 14:08:10 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:05:47.590 14:08:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:47.590 14:08:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=293675 00:05:47.590 14:08:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:47.590 [ 00:05:47.590 "bdev_malloc_delete", 00:05:47.590 "bdev_malloc_create", 00:05:47.590 "bdev_null_resize", 00:05:47.590 "bdev_null_delete", 00:05:47.590 "bdev_null_create", 00:05:47.590 "bdev_nvme_cuse_unregister", 00:05:47.590 "bdev_nvme_cuse_register", 00:05:47.590 "bdev_opal_new_user", 00:05:47.590 "bdev_opal_set_lock_state", 00:05:47.590 "bdev_opal_delete", 00:05:47.590 "bdev_opal_get_info", 00:05:47.590 "bdev_opal_create", 00:05:47.590 "bdev_nvme_opal_revert", 00:05:47.590 "bdev_nvme_opal_init", 00:05:47.590 "bdev_nvme_send_cmd", 00:05:47.590 "bdev_nvme_get_path_iostat", 00:05:47.590 "bdev_nvme_get_mdns_discovery_info", 00:05:47.590 "bdev_nvme_stop_mdns_discovery", 00:05:47.590 "bdev_nvme_start_mdns_discovery", 00:05:47.590 "bdev_nvme_set_multipath_policy", 00:05:47.590 "bdev_nvme_set_preferred_path", 00:05:47.590 "bdev_nvme_get_io_paths", 00:05:47.590 "bdev_nvme_remove_error_injection", 00:05:47.590 "bdev_nvme_add_error_injection", 00:05:47.590 "bdev_nvme_get_discovery_info", 00:05:47.590 "bdev_nvme_stop_discovery", 00:05:47.590 "bdev_nvme_start_discovery", 00:05:47.590 "bdev_nvme_get_controller_health_info", 00:05:47.590 "bdev_nvme_disable_controller", 00:05:47.590 "bdev_nvme_enable_controller", 00:05:47.590 "bdev_nvme_reset_controller", 00:05:47.590 "bdev_nvme_get_transport_statistics", 00:05:47.590 "bdev_nvme_apply_firmware", 00:05:47.590 "bdev_nvme_detach_controller", 00:05:47.590 "bdev_nvme_get_controllers", 00:05:47.590 "bdev_nvme_attach_controller", 00:05:47.590 "bdev_nvme_set_hotplug", 00:05:47.590 "bdev_nvme_set_options", 00:05:47.590 "bdev_passthru_delete", 00:05:47.590 "bdev_passthru_create", 00:05:47.590 "bdev_lvol_set_parent_bdev", 00:05:47.590 "bdev_lvol_set_parent", 00:05:47.590 "bdev_lvol_check_shallow_copy", 00:05:47.590 "bdev_lvol_start_shallow_copy", 00:05:47.590 "bdev_lvol_grow_lvstore", 00:05:47.590 "bdev_lvol_get_lvols", 00:05:47.590 "bdev_lvol_get_lvstores", 00:05:47.590 "bdev_lvol_delete", 00:05:47.590 "bdev_lvol_set_read_only", 00:05:47.590 "bdev_lvol_resize", 00:05:47.590 "bdev_lvol_decouple_parent", 00:05:47.590 "bdev_lvol_inflate", 00:05:47.590 "bdev_lvol_rename", 00:05:47.590 "bdev_lvol_clone_bdev", 00:05:47.590 "bdev_lvol_clone", 00:05:47.590 "bdev_lvol_snapshot", 00:05:47.590 "bdev_lvol_create", 00:05:47.590 "bdev_lvol_delete_lvstore", 00:05:47.590 "bdev_lvol_rename_lvstore", 
00:05:47.590 "bdev_lvol_create_lvstore", 00:05:47.590 "bdev_raid_set_options", 00:05:47.590 "bdev_raid_remove_base_bdev", 00:05:47.590 "bdev_raid_add_base_bdev", 00:05:47.590 "bdev_raid_delete", 00:05:47.590 "bdev_raid_create", 00:05:47.590 "bdev_raid_get_bdevs", 00:05:47.590 "bdev_error_inject_error", 00:05:47.590 "bdev_error_delete", 00:05:47.590 "bdev_error_create", 00:05:47.590 "bdev_split_delete", 00:05:47.590 "bdev_split_create", 00:05:47.590 "bdev_delay_delete", 00:05:47.590 "bdev_delay_create", 00:05:47.590 "bdev_delay_update_latency", 00:05:47.590 "bdev_zone_block_delete", 00:05:47.590 "bdev_zone_block_create", 00:05:47.590 "blobfs_create", 00:05:47.590 "blobfs_detect", 00:05:47.590 "blobfs_set_cache_size", 00:05:47.590 "bdev_aio_delete", 00:05:47.590 "bdev_aio_rescan", 00:05:47.590 "bdev_aio_create", 00:05:47.590 "bdev_ftl_set_property", 00:05:47.590 "bdev_ftl_get_properties", 00:05:47.590 "bdev_ftl_get_stats", 00:05:47.590 "bdev_ftl_unmap", 00:05:47.590 "bdev_ftl_unload", 00:05:47.590 "bdev_ftl_delete", 00:05:47.590 "bdev_ftl_load", 00:05:47.590 "bdev_ftl_create", 00:05:47.590 "bdev_virtio_attach_controller", 00:05:47.590 "bdev_virtio_scsi_get_devices", 00:05:47.590 "bdev_virtio_detach_controller", 00:05:47.590 "bdev_virtio_blk_set_hotplug", 00:05:47.590 "bdev_iscsi_delete", 00:05:47.590 "bdev_iscsi_create", 00:05:47.590 "bdev_iscsi_set_options", 00:05:47.590 "accel_error_inject_error", 00:05:47.590 "ioat_scan_accel_module", 00:05:47.590 "dsa_scan_accel_module", 00:05:47.590 "iaa_scan_accel_module", 00:05:47.590 "vfu_virtio_create_scsi_endpoint", 00:05:47.590 "vfu_virtio_scsi_remove_target", 00:05:47.590 "vfu_virtio_scsi_add_target", 00:05:47.590 "vfu_virtio_create_blk_endpoint", 00:05:47.590 "vfu_virtio_delete_endpoint", 00:05:47.590 "keyring_file_remove_key", 00:05:47.590 "keyring_file_add_key", 00:05:47.590 "keyring_linux_set_options", 00:05:47.590 "iscsi_get_histogram", 00:05:47.590 "iscsi_enable_histogram", 00:05:47.590 "iscsi_set_options", 00:05:47.590 "iscsi_get_auth_groups", 00:05:47.590 "iscsi_auth_group_remove_secret", 00:05:47.590 "iscsi_auth_group_add_secret", 00:05:47.590 "iscsi_delete_auth_group", 00:05:47.590 "iscsi_create_auth_group", 00:05:47.590 "iscsi_set_discovery_auth", 00:05:47.590 "iscsi_get_options", 00:05:47.590 "iscsi_target_node_request_logout", 00:05:47.590 "iscsi_target_node_set_redirect", 00:05:47.590 "iscsi_target_node_set_auth", 00:05:47.590 "iscsi_target_node_add_lun", 00:05:47.590 "iscsi_get_stats", 00:05:47.590 "iscsi_get_connections", 00:05:47.590 "iscsi_portal_group_set_auth", 00:05:47.590 "iscsi_start_portal_group", 00:05:47.590 "iscsi_delete_portal_group", 00:05:47.590 "iscsi_create_portal_group", 00:05:47.590 "iscsi_get_portal_groups", 00:05:47.590 "iscsi_delete_target_node", 00:05:47.590 "iscsi_target_node_remove_pg_ig_maps", 00:05:47.590 "iscsi_target_node_add_pg_ig_maps", 00:05:47.590 "iscsi_create_target_node", 00:05:47.590 "iscsi_get_target_nodes", 00:05:47.590 "iscsi_delete_initiator_group", 00:05:47.590 "iscsi_initiator_group_remove_initiators", 00:05:47.590 "iscsi_initiator_group_add_initiators", 00:05:47.590 "iscsi_create_initiator_group", 00:05:47.590 "iscsi_get_initiator_groups", 00:05:47.590 "nvmf_set_crdt", 00:05:47.590 "nvmf_set_config", 00:05:47.590 "nvmf_set_max_subsystems", 00:05:47.590 "nvmf_stop_mdns_prr", 00:05:47.590 "nvmf_publish_mdns_prr", 00:05:47.590 "nvmf_subsystem_get_listeners", 00:05:47.590 "nvmf_subsystem_get_qpairs", 00:05:47.590 "nvmf_subsystem_get_controllers", 00:05:47.590 "nvmf_get_stats", 00:05:47.590 
"nvmf_get_transports", 00:05:47.590 "nvmf_create_transport", 00:05:47.590 "nvmf_get_targets", 00:05:47.590 "nvmf_delete_target", 00:05:47.590 "nvmf_create_target", 00:05:47.590 "nvmf_subsystem_allow_any_host", 00:05:47.590 "nvmf_subsystem_remove_host", 00:05:47.590 "nvmf_subsystem_add_host", 00:05:47.590 "nvmf_ns_remove_host", 00:05:47.590 "nvmf_ns_add_host", 00:05:47.590 "nvmf_subsystem_remove_ns", 00:05:47.590 "nvmf_subsystem_add_ns", 00:05:47.590 "nvmf_subsystem_listener_set_ana_state", 00:05:47.590 "nvmf_discovery_get_referrals", 00:05:47.590 "nvmf_discovery_remove_referral", 00:05:47.590 "nvmf_discovery_add_referral", 00:05:47.590 "nvmf_subsystem_remove_listener", 00:05:47.590 "nvmf_subsystem_add_listener", 00:05:47.590 "nvmf_delete_subsystem", 00:05:47.590 "nvmf_create_subsystem", 00:05:47.590 "nvmf_get_subsystems", 00:05:47.590 "env_dpdk_get_mem_stats", 00:05:47.590 "nbd_get_disks", 00:05:47.590 "nbd_stop_disk", 00:05:47.590 "nbd_start_disk", 00:05:47.590 "ublk_recover_disk", 00:05:47.590 "ublk_get_disks", 00:05:47.590 "ublk_stop_disk", 00:05:47.590 "ublk_start_disk", 00:05:47.590 "ublk_destroy_target", 00:05:47.590 "ublk_create_target", 00:05:47.590 "virtio_blk_create_transport", 00:05:47.590 "virtio_blk_get_transports", 00:05:47.590 "vhost_controller_set_coalescing", 00:05:47.590 "vhost_get_controllers", 00:05:47.590 "vhost_delete_controller", 00:05:47.590 "vhost_create_blk_controller", 00:05:47.590 "vhost_scsi_controller_remove_target", 00:05:47.590 "vhost_scsi_controller_add_target", 00:05:47.590 "vhost_start_scsi_controller", 00:05:47.591 "vhost_create_scsi_controller", 00:05:47.591 "thread_set_cpumask", 00:05:47.591 "framework_get_scheduler", 00:05:47.591 "framework_set_scheduler", 00:05:47.591 "framework_get_reactors", 00:05:47.591 "thread_get_io_channels", 00:05:47.591 "thread_get_pollers", 00:05:47.591 "thread_get_stats", 00:05:47.591 "framework_monitor_context_switch", 00:05:47.591 "spdk_kill_instance", 00:05:47.591 "log_enable_timestamps", 00:05:47.591 "log_get_flags", 00:05:47.591 "log_clear_flag", 00:05:47.591 "log_set_flag", 00:05:47.591 "log_get_level", 00:05:47.591 "log_set_level", 00:05:47.591 "log_get_print_level", 00:05:47.591 "log_set_print_level", 00:05:47.591 "framework_enable_cpumask_locks", 00:05:47.591 "framework_disable_cpumask_locks", 00:05:47.591 "framework_wait_init", 00:05:47.591 "framework_start_init", 00:05:47.591 "scsi_get_devices", 00:05:47.591 "bdev_get_histogram", 00:05:47.591 "bdev_enable_histogram", 00:05:47.591 "bdev_set_qos_limit", 00:05:47.591 "bdev_set_qd_sampling_period", 00:05:47.591 "bdev_get_bdevs", 00:05:47.591 "bdev_reset_iostat", 00:05:47.591 "bdev_get_iostat", 00:05:47.591 "bdev_examine", 00:05:47.591 "bdev_wait_for_examine", 00:05:47.591 "bdev_set_options", 00:05:47.591 "notify_get_notifications", 00:05:47.591 "notify_get_types", 00:05:47.591 "accel_get_stats", 00:05:47.591 "accel_set_options", 00:05:47.591 "accel_set_driver", 00:05:47.591 "accel_crypto_key_destroy", 00:05:47.591 "accel_crypto_keys_get", 00:05:47.591 "accel_crypto_key_create", 00:05:47.591 "accel_assign_opc", 00:05:47.591 "accel_get_module_info", 00:05:47.591 "accel_get_opc_assignments", 00:05:47.591 "vmd_rescan", 00:05:47.591 "vmd_remove_device", 00:05:47.591 "vmd_enable", 00:05:47.591 "sock_get_default_impl", 00:05:47.591 "sock_set_default_impl", 00:05:47.591 "sock_impl_set_options", 00:05:47.591 "sock_impl_get_options", 00:05:47.591 "iobuf_get_stats", 00:05:47.591 "iobuf_set_options", 00:05:47.591 "keyring_get_keys", 00:05:47.591 "framework_get_pci_devices", 
00:05:47.591 "framework_get_config", 00:05:47.591 "framework_get_subsystems", 00:05:47.591 "vfu_tgt_set_base_path", 00:05:47.591 "trace_get_info", 00:05:47.591 "trace_get_tpoint_group_mask", 00:05:47.591 "trace_disable_tpoint_group", 00:05:47.591 "trace_enable_tpoint_group", 00:05:47.591 "trace_clear_tpoint_mask", 00:05:47.591 "trace_set_tpoint_mask", 00:05:47.591 "spdk_get_version", 00:05:47.591 "rpc_get_methods" 00:05:47.591 ] 00:05:47.591 14:08:11 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:47.591 14:08:11 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:47.591 14:08:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.591 14:08:11 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:47.591 14:08:11 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 293399 00:05:47.591 14:08:11 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 293399 ']' 00:05:47.591 14:08:11 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 293399 00:05:47.591 14:08:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:05:47.591 14:08:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:47.591 14:08:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 293399 00:05:47.591 14:08:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:47.591 14:08:11 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:47.591 14:08:11 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 293399' 00:05:47.591 killing process with pid 293399 00:05:47.591 14:08:11 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 293399 00:05:47.591 14:08:11 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 293399 00:05:47.852 00:05:47.852 real 0m1.361s 00:05:47.852 user 0m2.488s 00:05:47.852 sys 0m0.434s 00:05:47.852 14:08:11 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:47.852 14:08:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.852 ************************************ 00:05:47.852 END TEST spdkcli_tcp 00:05:47.852 ************************************ 00:05:47.852 14:08:11 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.852 14:08:11 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:47.852 14:08:11 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:47.852 14:08:11 -- common/autotest_common.sh@10 -- # set +x 00:05:47.852 ************************************ 00:05:47.852 START TEST dpdk_mem_utility 00:05:47.852 ************************************ 00:05:47.852 14:08:11 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:48.113 * Looking for test storage... 
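The method list above is retrieved over TCP rather than the UNIX socket: tcp.sh bridges port 9998 to the target's socket with socat and points rpc.py at 127.0.0.1:9998. A minimal sketch of that round trip, using the same flags as the trace:

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &            # bridge TCP 9998 -> RPC socket
socat_pid=$!
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods    # returns the list shown above
kill "$socat_pid"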
00:05:48.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:48.113 14:08:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:48.113 14:08:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=293806 00:05:48.113 14:08:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 293806 00:05:48.113 14:08:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.113 14:08:11 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 293806 ']' 00:05:48.113 14:08:11 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.113 14:08:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:48.113 14:08:11 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.113 14:08:11 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:48.113 14:08:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.113 [2024-06-07 14:08:11.589168] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:05:48.113 [2024-06-07 14:08:11.589244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid293806 ] 00:05:48.113 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.113 [2024-06-07 14:08:11.662271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.113 [2024-06-07 14:08:11.701014] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.056 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:49.056 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:05:49.056 14:08:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:49.056 14:08:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:49.056 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:49.056 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.056 { 00:05:49.056 "filename": "/tmp/spdk_mem_dump.txt" 00:05:49.056 } 00:05:49.056 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:49.056 14:08:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:49.056 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:49.056 1 heaps totaling size 814.000000 MiB 00:05:49.056 size: 814.000000 MiB heap id: 0 00:05:49.056 end heaps---------- 00:05:49.056 8 mempools totaling size 598.116089 MiB 00:05:49.056 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:49.056 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:49.056 size: 84.521057 MiB name: bdev_io_293806 00:05:49.056 size: 51.011292 MiB name: evtpool_293806 00:05:49.056 size: 50.003479 MiB name: 
msgpool_293806 00:05:49.056 size: 21.763794 MiB name: PDU_Pool 00:05:49.057 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:49.057 size: 0.026123 MiB name: Session_Pool 00:05:49.057 end mempools------- 00:05:49.057 6 memzones totaling size 4.142822 MiB 00:05:49.057 size: 1.000366 MiB name: RG_ring_0_293806 00:05:49.057 size: 1.000366 MiB name: RG_ring_1_293806 00:05:49.057 size: 1.000366 MiB name: RG_ring_4_293806 00:05:49.057 size: 1.000366 MiB name: RG_ring_5_293806 00:05:49.057 size: 0.125366 MiB name: RG_ring_2_293806 00:05:49.057 size: 0.015991 MiB name: RG_ring_3_293806 00:05:49.057 end memzones------- 00:05:49.057 14:08:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:49.057 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:49.057 list of free elements. size: 12.519348 MiB 00:05:49.057 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:49.057 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:49.057 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:49.057 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:49.057 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:49.057 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:49.057 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:49.057 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:49.057 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:49.057 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:49.057 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:49.057 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:49.057 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:49.057 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:49.057 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:49.057 list of standard malloc elements. 
size: 199.218079 MiB 00:05:49.057 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:49.057 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:49.057 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:49.057 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:49.057 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:49.057 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:49.057 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:49.057 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:49.057 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:49.057 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:49.057 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:49.057 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:49.057 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:49.057 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:49.057 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:49.057 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:49.057 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:49.057 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:49.057 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:49.057 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:49.057 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:49.057 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:49.057 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:49.057 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:49.057 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:49.057 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:49.057 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:49.057 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:49.057 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:49.057 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:49.057 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:49.057 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:49.057 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:49.057 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:49.057 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:49.057 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:49.057 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:49.057 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:49.057 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:49.057 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:49.057 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:49.057 list of memzone associated elements. 
size: 602.262573 MiB 00:05:49.057 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:49.057 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:49.057 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:49.057 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:49.057 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:49.057 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_293806_0 00:05:49.057 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:49.057 associated memzone info: size: 48.002930 MiB name: MP_evtpool_293806_0 00:05:49.057 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:49.057 associated memzone info: size: 48.002930 MiB name: MP_msgpool_293806_0 00:05:49.057 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:49.057 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:49.057 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:49.057 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:49.057 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:49.057 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_293806 00:05:49.057 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:49.057 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_293806 00:05:49.057 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:49.057 associated memzone info: size: 1.007996 MiB name: MP_evtpool_293806 00:05:49.057 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:49.057 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:49.057 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:49.057 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:49.057 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:49.057 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:49.057 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:49.057 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:49.057 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:49.057 associated memzone info: size: 1.000366 MiB name: RG_ring_0_293806 00:05:49.057 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:49.057 associated memzone info: size: 1.000366 MiB name: RG_ring_1_293806 00:05:49.057 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:49.057 associated memzone info: size: 1.000366 MiB name: RG_ring_4_293806 00:05:49.057 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:49.057 associated memzone info: size: 1.000366 MiB name: RG_ring_5_293806 00:05:49.057 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:49.057 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_293806 00:05:49.057 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:49.057 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:49.057 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:49.057 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:49.057 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:49.057 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:49.057 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:49.057 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_293806 00:05:49.057 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:49.057 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:49.057 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:49.057 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:49.057 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:49.057 associated memzone info: size: 0.015991 MiB name: RG_ring_3_293806 00:05:49.057 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:49.057 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:49.057 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:49.057 associated memzone info: size: 0.000183 MiB name: MP_msgpool_293806 00:05:49.057 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:49.057 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_293806 00:05:49.057 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:49.057 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:49.057 14:08:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:49.057 14:08:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 293806 00:05:49.057 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 293806 ']' 00:05:49.057 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 293806 00:05:49.057 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:05:49.058 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:49.058 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 293806 00:05:49.058 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:49.058 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:49.058 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 293806' 00:05:49.058 killing process with pid 293806 00:05:49.058 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 293806 00:05:49.058 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 293806 00:05:49.318 00:05:49.318 real 0m1.292s 00:05:49.318 user 0m1.389s 00:05:49.318 sys 0m0.366s 00:05:49.318 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:49.318 14:08:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.318 ************************************ 00:05:49.318 END TEST dpdk_mem_utility 00:05:49.319 ************************************ 00:05:49.319 14:08:12 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.319 14:08:12 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:49.319 14:08:12 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:49.319 14:08:12 -- common/autotest_common.sh@10 -- # set +x 00:05:49.319 ************************************ 00:05:49.319 START TEST event 00:05:49.319 ************************************ 00:05:49.319 14:08:12 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.319 * Looking for test storage... 
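The dpdk_mem_utility run above drives scripts/dpdk_mem_info.py against the running SPDK target (pid 293806): the first pass prints the mempool and memzone summaries ("end mempools-------" / "end memzones-------"), and the -m 0 pass prints the detailed element list for DPDK malloc heap 0. A minimal sketch of reproducing the same dumps by hand, assuming an SPDK application is already running on the default RPC socket and that the plain invocation is what produced the summary view above:

# summary view: mempools (msgpool, PDU_Pool, SCSI_TASK_Pool, Session_Pool) and memzones (RG_ring_*)
./scripts/dpdk_mem_info.py
# detailed view of malloc heap 0: free elements, standard malloc elements, memzone associations
./scripts/dpdk_mem_info.py -m 0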
00:05:49.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:49.319 14:08:12 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:49.319 14:08:12 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:49.319 14:08:12 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.319 14:08:12 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:05:49.319 14:08:12 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:49.319 14:08:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.319 ************************************ 00:05:49.319 START TEST event_perf 00:05:49.319 ************************************ 00:05:49.319 14:08:12 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.319 Running I/O for 1 seconds...[2024-06-07 14:08:12.956269] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:05:49.319 [2024-06-07 14:08:12.956363] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid294192 ] 00:05:49.580 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.580 [2024-06-07 14:08:13.033336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.580 [2024-06-07 14:08:13.075646] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.580 [2024-06-07 14:08:13.075763] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.580 [2024-06-07 14:08:13.075920] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.580 Running I/O for 1 seconds...[2024-06-07 14:08:13.075920] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.523 00:05:50.523 lcore 0: 178915 00:05:50.523 lcore 1: 178913 00:05:50.523 lcore 2: 178913 00:05:50.523 lcore 3: 178916 00:05:50.523 done. 00:05:50.523 00:05:50.523 real 0m1.181s 00:05:50.523 user 0m4.082s 00:05:50.523 sys 0m0.096s 00:05:50.523 14:08:14 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:50.523 14:08:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.523 ************************************ 00:05:50.523 END TEST event_perf 00:05:50.523 ************************************ 00:05:50.523 14:08:14 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:50.523 14:08:14 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:50.523 14:08:14 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:50.523 14:08:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.782 ************************************ 00:05:50.782 START TEST event_reactor 00:05:50.782 ************************************ 00:05:50.782 14:08:14 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:50.782 [2024-06-07 14:08:14.212594] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:05:50.782 [2024-06-07 14:08:14.212679] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid294453 ] 00:05:50.782 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.782 [2024-06-07 14:08:14.281429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.782 [2024-06-07 14:08:14.313417] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.722 test_start 00:05:51.722 oneshot 00:05:51.722 tick 100 00:05:51.722 tick 100 00:05:51.722 tick 250 00:05:51.722 tick 100 00:05:51.722 tick 100 00:05:51.722 tick 250 00:05:51.722 tick 100 00:05:51.722 tick 500 00:05:51.722 tick 100 00:05:51.722 tick 100 00:05:51.722 tick 250 00:05:51.722 tick 100 00:05:51.722 tick 100 00:05:51.722 test_end 00:05:51.722 00:05:51.722 real 0m1.159s 00:05:51.722 user 0m1.083s 00:05:51.722 sys 0m0.073s 00:05:51.722 14:08:15 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:51.722 14:08:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:51.722 ************************************ 00:05:51.722 END TEST event_reactor 00:05:51.722 ************************************ 00:05:51.982 14:08:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:51.982 14:08:15 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:05:51.982 14:08:15 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:51.982 14:08:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.982 ************************************ 00:05:51.982 START TEST event_reactor_perf 00:05:51.982 ************************************ 00:05:51.982 14:08:15 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:51.982 [2024-06-07 14:08:15.448422] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
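The three event-framework tests traced here are standalone SPDK binaries built under test/event/. Reused verbatim from the runs above (only the leading workspace path dropped), -m is the reactor core mask and -t the run time in seconds (the runs report "Running I/O for 1 seconds" and real times of roughly 1.2 s):

# event dispatch benchmark on 4 cores (reports per-lcore event counts, as above)
./test/event/event_perf/event_perf -m 0xF -t 1
# single-core reactor timer test (the tick 100/250/500 output) and reactor throughput test
./test/event/reactor/reactor -t 1
./test/event/reactor_perf/reactor_perf -t 1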
00:05:51.982 [2024-06-07 14:08:15.448506] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid294593 ] 00:05:51.982 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.982 [2024-06-07 14:08:15.520553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.982 [2024-06-07 14:08:15.557017] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.361 test_start 00:05:53.361 test_end 00:05:53.361 Performance: 370413 events per second 00:05:53.361 00:05:53.361 real 0m1.167s 00:05:53.361 user 0m1.084s 00:05:53.361 sys 0m0.078s 00:05:53.361 14:08:16 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:53.361 14:08:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.361 ************************************ 00:05:53.361 END TEST event_reactor_perf 00:05:53.361 ************************************ 00:05:53.361 14:08:16 event -- event/event.sh@49 -- # uname -s 00:05:53.361 14:08:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:53.361 14:08:16 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:53.361 14:08:16 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:53.361 14:08:16 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:53.361 14:08:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.361 ************************************ 00:05:53.361 START TEST event_scheduler 00:05:53.361 ************************************ 00:05:53.361 14:08:16 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:53.361 * Looking for test storage... 00:05:53.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:53.361 14:08:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:53.361 14:08:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=294963 00:05:53.361 14:08:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.361 14:08:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:53.361 14:08:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 294963 00:05:53.361 14:08:16 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 294963 ']' 00:05:53.361 14:08:16 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.361 14:08:16 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:53.361 14:08:16 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
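The scheduler test launches its app paused (--wait-for-rpc defers framework initialization), waits for the RPC socket, switches to the dynamic scheduler, and only then finishes init. A condensed sketch of that sequence, assuming rpc_cmd in the trace resolves to scripts/rpc.py talking to the default /var/tmp/spdk.sock socket:

# 4 cores (-m 0xF), main lcore 2 (-p 0x2 maps to --main-lcore=2 in the EAL parameters below)
./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
# once the socket is up: pick the dynamic scheduler, then complete framework init
./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_start_init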
00:05:53.361 14:08:16 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:53.361 14:08:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.361 [2024-06-07 14:08:16.835634] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:05:53.361 [2024-06-07 14:08:16.835717] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid294963 ] 00:05:53.361 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.361 [2024-06-07 14:08:16.898345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.361 [2024-06-07 14:08:16.937612] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.361 [2024-06-07 14:08:16.937759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.361 [2024-06-07 14:08:16.937882] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.361 [2024-06-07 14:08:16.937884] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.301 14:08:17 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:54.301 14:08:17 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:05:54.301 14:08:17 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:54.301 14:08:17 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:54.301 14:08:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.301 POWER: Env isn't set yet! 00:05:54.301 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:54.301 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.301 POWER: Cannot set governor of lcore 0 to userspace 00:05:54.301 POWER: Attempting to initialise PSTAT power management... 
00:05:54.301 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:54.301 POWER: Initialized successfully for lcore 0 power management 00:05:54.301 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:54.301 POWER: Initialized successfully for lcore 1 power management 00:05:54.301 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:54.301 POWER: Initialized successfully for lcore 2 power management 00:05:54.301 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:54.301 POWER: Initialized successfully for lcore 3 power management 00:05:54.301 [2024-06-07 14:08:17.657720] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:54.301 [2024-06-07 14:08:17.657731] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:54.301 [2024-06-07 14:08:17.657737] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:54.301 14:08:17 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:54.301 14:08:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:54.301 14:08:17 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:54.301 14:08:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.301 [2024-06-07 14:08:17.706725] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:54.301 14:08:17 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:54.301 14:08:17 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:54.301 14:08:17 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:54.301 14:08:17 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:54.301 14:08:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.301 ************************************ 00:05:54.301 START TEST scheduler_create_thread 00:05:54.301 ************************************ 00:05:54.301 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:05:54.301 14:08:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:54.301 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:54.301 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.301 2 00:05:54.301 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:54.301 14:08:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:54.301 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:54.301 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.301 3 00:05:54.301 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:54.301 14:08:17 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:54.301 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:54.301 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.301 4 00:05:54.301 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:54.301 14:08:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.302 5 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.302 6 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.302 7 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.302 8 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.302 9 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:05:54.302 14:08:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.684 10 00:05:55.684 14:08:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:55.684 14:08:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:55.684 14:08:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:55.684 14:08:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.064 14:08:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:57.064 14:08:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:57.064 14:08:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:57.064 14:08:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:57.064 14:08:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.634 14:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:57.634 14:08:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:57.634 14:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:57.634 14:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.574 14:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:58.574 14:08:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.574 14:08:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.574 14:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:58.574 14:08:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.146 14:08:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:59.146 00:05:59.146 real 0m4.798s 00:05:59.146 user 0m0.027s 00:05:59.146 sys 0m0.003s 00:05:59.146 14:08:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:59.146 14:08:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.146 ************************************ 00:05:59.146 END TEST scheduler_create_thread 00:05:59.146 ************************************ 00:05:59.146 14:08:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:59.146 14:08:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 294963 00:05:59.146 14:08:22 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 294963 ']' 00:05:59.146 14:08:22 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 294963 00:05:59.146 14:08:22 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
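The scheduler_create_thread test that just completed exercises the test app's RPC plugin: it creates pinned busy/idle threads and unpinned partially active ones, then adjusts and deletes threads by id. A hedged sketch of the same calls, assuming rpc_cmd wraps scripts/rpc.py and that the scheduler_plugin module from test/event/scheduler is on rpc.py's plugin path; -m is a pin mask, -a the simulated busy percentage, and 11/12 are the thread ids echoed by the create calls in this run:

./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12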
00:05:59.146 14:08:22 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:59.146 14:08:22 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 294963 00:05:59.146 14:08:22 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:05:59.146 14:08:22 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:05:59.146 14:08:22 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 294963' 00:05:59.146 killing process with pid 294963 00:05:59.146 14:08:22 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 294963 00:05:59.146 14:08:22 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 294963 00:05:59.440 [2024-06-07 14:08:22.820601] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:59.440 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:59.440 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:59.440 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:59.440 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:59.440 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:59.440 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:59.440 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:59.440 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:59.440 00:05:59.440 real 0m6.298s 00:05:59.440 user 0m14.172s 00:05:59.440 sys 0m0.364s 00:05:59.440 14:08:22 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:59.440 14:08:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.440 ************************************ 00:05:59.440 END TEST event_scheduler 00:05:59.440 ************************************ 00:05:59.440 14:08:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:59.440 14:08:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:59.440 14:08:23 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:59.440 14:08:23 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:59.440 14:08:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.440 ************************************ 00:05:59.440 START TEST app_repeat 00:05:59.440 ************************************ 00:05:59.440 14:08:23 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:05:59.440 14:08:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.440 14:08:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.440 14:08:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:59.440 14:08:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.440 14:08:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:59.440 14:08:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:59.440 14:08:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:59.440 14:08:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=296352 00:05:59.440 14:08:23 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.440 14:08:23 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:59.440 14:08:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 296352' 00:05:59.440 Process app_repeat pid: 296352 00:05:59.440 14:08:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.440 14:08:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:59.440 spdk_app_start Round 0 00:05:59.440 14:08:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 296352 /var/tmp/spdk-nbd.sock 00:05:59.440 14:08:23 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 296352 ']' 00:05:59.440 14:08:23 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.440 14:08:23 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:59.440 14:08:23 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.440 14:08:23 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:59.440 14:08:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.701 [2024-06-07 14:08:23.095501] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:05:59.701 [2024-06-07 14:08:23.095569] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid296352 ] 00:05:59.701 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.701 [2024-06-07 14:08:23.162263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.701 [2024-06-07 14:08:23.194227] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.701 [2024-06-07 14:08:23.194256] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.701 14:08:23 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:59.701 14:08:23 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:59.701 14:08:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.960 Malloc0 00:05:59.960 14:08:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.960 Malloc1 00:05:59.960 14:08:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.960 14:08:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.960 14:08:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.960 14:08:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.960 14:08:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.960 14:08:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.960 14:08:23 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.960 14:08:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.221 14:08:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.221 14:08:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.221 14:08:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.221 14:08:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.221 14:08:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.221 14:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.221 14:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.221 14:08:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.221 /dev/nbd0 00:06:00.221 14:08:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.221 14:08:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.221 14:08:23 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:00.221 14:08:23 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:00.221 14:08:23 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:00.221 14:08:23 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:00.221 14:08:23 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:00.221 14:08:23 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:00.221 14:08:23 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:00.221 14:08:23 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:00.221 14:08:23 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.221 1+0 records in 00:06:00.221 1+0 records out 00:06:00.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268487 s, 15.3 MB/s 00:06:00.221 14:08:23 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.221 14:08:23 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:00.221 14:08:23 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.221 14:08:23 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:00.221 14:08:23 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:00.221 14:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.221 14:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.221 14:08:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.481 /dev/nbd1 00:06:00.481 14:08:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.481 14:08:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.481 14:08:23 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:00.481 14:08:23 event.app_repeat -- 
common/autotest_common.sh@868 -- # local i 00:06:00.481 14:08:23 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:00.481 14:08:23 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:00.481 14:08:23 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:00.481 14:08:23 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:00.481 14:08:23 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:00.481 14:08:23 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:00.481 14:08:23 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.481 1+0 records in 00:06:00.481 1+0 records out 00:06:00.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002557 s, 16.0 MB/s 00:06:00.481 14:08:23 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.481 14:08:23 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:00.481 14:08:23 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.481 14:08:23 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:00.481 14:08:23 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:00.481 14:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.481 14:08:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.481 14:08:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.481 14:08:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.481 14:08:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.742 { 00:06:00.742 "nbd_device": "/dev/nbd0", 00:06:00.742 "bdev_name": "Malloc0" 00:06:00.742 }, 00:06:00.742 { 00:06:00.742 "nbd_device": "/dev/nbd1", 00:06:00.742 "bdev_name": "Malloc1" 00:06:00.742 } 00:06:00.742 ]' 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.742 { 00:06:00.742 "nbd_device": "/dev/nbd0", 00:06:00.742 "bdev_name": "Malloc0" 00:06:00.742 }, 00:06:00.742 { 00:06:00.742 "nbd_device": "/dev/nbd1", 00:06:00.742 "bdev_name": "Malloc1" 00:06:00.742 } 00:06:00.742 ]' 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.742 /dev/nbd1' 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.742 /dev/nbd1' 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.742 14:08:24 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.742 256+0 records in 00:06:00.742 256+0 records out 00:06:00.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120623 s, 86.9 MB/s 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.742 256+0 records in 00:06:00.742 256+0 records out 00:06:00.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0392729 s, 26.7 MB/s 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.742 256+0 records in 00:06:00.742 256+0 records out 00:06:00.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0415875 s, 25.2 MB/s 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.742 14:08:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.001 14:08:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.001 14:08:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.002 14:08:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.261 14:08:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.261 14:08:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.261 14:08:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.261 14:08:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.261 14:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.261 14:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.261 14:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.261 14:08:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.261 14:08:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.261 14:08:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.261 14:08:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.261 14:08:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.261 14:08:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.520 14:08:25 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:01.520 [2024-06-07 14:08:25.135229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.520 [2024-06-07 14:08:25.165341] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.520 [2024-06-07 14:08:25.165344] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.779 [2024-06-07 14:08:25.196520] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.779 [2024-06-07 14:08:25.196555] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.073 14:08:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.073 14:08:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:05.073 spdk_app_start Round 1 00:06:05.073 14:08:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 296352 /var/tmp/spdk-nbd.sock 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 296352 ']' 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:05.073 14:08:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.073 Malloc0 00:06:05.073 14:08:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.073 Malloc1 00:06:05.073 14:08:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
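Round 0 above already showed the full nbd_rpc_data_verify flow that Round 1 is about to repeat: export each malloc bdev over NBD, write a 1 MiB random pattern through the block device with direct I/O, and read it back with cmp. Condensed, with the temp file path shortened for illustration (the test keeps it under test/event/):

# app_repeat is already serving RPC on /var/tmp/spdk-nbd.sock (started with -r above)
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # -> Malloc0
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct       # write through the NBD device
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                                  # read back and compare
rm /tmp/nbdrandtest
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0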
00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.073 /dev/nbd0 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.073 1+0 records in 00:06:05.073 1+0 records out 00:06:05.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024353 s, 16.8 MB/s 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:05.073 14:08:28 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.073 14:08:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.334 /dev/nbd1 00:06:05.334 14:08:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.334 14:08:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.334 14:08:28 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:05.334 14:08:28 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:05.334 14:08:28 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:05.334 14:08:28 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:05.334 14:08:28 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:05.334 14:08:28 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:05.334 14:08:28 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:05.334 14:08:28 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
00:06:05.334 14:08:28 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.334 1+0 records in 00:06:05.334 1+0 records out 00:06:05.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292496 s, 14.0 MB/s 00:06:05.334 14:08:28 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.334 14:08:28 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:05.334 14:08:28 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.334 14:08:28 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:05.334 14:08:28 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:05.334 14:08:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.334 14:08:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.334 14:08:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.334 14:08:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.334 14:08:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.596 { 00:06:05.596 "nbd_device": "/dev/nbd0", 00:06:05.596 "bdev_name": "Malloc0" 00:06:05.596 }, 00:06:05.596 { 00:06:05.596 "nbd_device": "/dev/nbd1", 00:06:05.596 "bdev_name": "Malloc1" 00:06:05.596 } 00:06:05.596 ]' 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.596 { 00:06:05.596 "nbd_device": "/dev/nbd0", 00:06:05.596 "bdev_name": "Malloc0" 00:06:05.596 }, 00:06:05.596 { 00:06:05.596 "nbd_device": "/dev/nbd1", 00:06:05.596 "bdev_name": "Malloc1" 00:06:05.596 } 00:06:05.596 ]' 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.596 /dev/nbd1' 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.596 /dev/nbd1' 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.596 256+0 records in 00:06:05.596 256+0 records out 00:06:05.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121254 s, 86.5 MB/s 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.596 256+0 records in 00:06:05.596 256+0 records out 00:06:05.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168703 s, 62.2 MB/s 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.596 256+0 records in 00:06:05.596 256+0 records out 00:06:05.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167821 s, 62.5 MB/s 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.596 14:08:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.858 
14:08:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.858 14:08:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.119 14:08:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.119 14:08:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.119 14:08:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.119 14:08:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.119 14:08:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.119 14:08:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.119 14:08:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.119 14:08:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.119 14:08:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.119 14:08:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.119 14:08:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.119 14:08:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.119 14:08:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.380 14:08:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.380 [2024-06-07 14:08:29.956170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.380 [2024-06-07 14:08:29.986002] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.380 [2024-06-07 14:08:29.986004] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.380 [2024-06-07 14:08:30.018520] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:06.380 [2024-06-07 14:08:30.018559] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.679 14:08:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:09.679 14:08:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:09.679 spdk_app_start Round 2 00:06:09.679 14:08:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 296352 /var/tmp/spdk-nbd.sock 00:06:09.679 14:08:32 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 296352 ']' 00:06:09.679 14:08:32 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.679 14:08:32 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:09.679 14:08:32 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.679 14:08:32 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:09.679 14:08:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.679 14:08:32 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:09.679 14:08:32 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:09.679 14:08:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.679 Malloc0 00:06:09.679 14:08:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.679 Malloc1 00:06:09.679 14:08:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.679 14:08:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.679 14:08:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.679 14:08:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.679 14:08:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.679 14:08:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.679 14:08:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.679 14:08:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.679 14:08:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.679 14:08:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.679 14:08:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.679 14:08:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.679 14:08:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:09.679 14:08:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.680 14:08:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.680 14:08:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.940 /dev/nbd0 00:06:09.940 14:08:33 
event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.940 14:08:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.940 14:08:33 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:09.940 14:08:33 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:09.940 14:08:33 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:09.940 14:08:33 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:09.940 14:08:33 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:09.940 14:08:33 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:09.940 14:08:33 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:09.940 14:08:33 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:09.940 14:08:33 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.940 1+0 records in 00:06:09.940 1+0 records out 00:06:09.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205314 s, 19.9 MB/s 00:06:09.940 14:08:33 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.940 14:08:33 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:09.940 14:08:33 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.940 14:08:33 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:09.940 14:08:33 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:09.940 14:08:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.940 14:08:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.940 14:08:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.200 /dev/nbd1 00:06:10.200 14:08:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.200 14:08:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.200 14:08:33 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:10.200 14:08:33 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:10.200 14:08:33 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:10.200 14:08:33 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:10.200 14:08:33 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:10.200 14:08:33 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:10.200 14:08:33 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:10.200 14:08:33 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:10.200 14:08:33 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.200 1+0 records in 00:06:10.200 1+0 records out 00:06:10.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278681 s, 14.7 MB/s 00:06:10.200 14:08:33 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.200 14:08:33 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:10.200 14:08:33 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.200 14:08:33 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:10.200 14:08:33 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:10.200 14:08:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.200 14:08:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.200 14:08:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.200 14:08:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.201 14:08:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.201 14:08:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.201 { 00:06:10.201 "nbd_device": "/dev/nbd0", 00:06:10.201 "bdev_name": "Malloc0" 00:06:10.201 }, 00:06:10.201 { 00:06:10.201 "nbd_device": "/dev/nbd1", 00:06:10.201 "bdev_name": "Malloc1" 00:06:10.201 } 00:06:10.201 ]' 00:06:10.201 14:08:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.201 { 00:06:10.201 "nbd_device": "/dev/nbd0", 00:06:10.201 "bdev_name": "Malloc0" 00:06:10.201 }, 00:06:10.201 { 00:06:10.201 "nbd_device": "/dev/nbd1", 00:06:10.201 "bdev_name": "Malloc1" 00:06:10.201 } 00:06:10.201 ]' 00:06:10.201 14:08:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.461 /dev/nbd1' 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.461 /dev/nbd1' 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.461 256+0 records in 00:06:10.461 256+0 records out 00:06:10.461 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124209 s, 84.4 MB/s 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.461 256+0 records in 00:06:10.461 256+0 records out 00:06:10.461 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158936 s, 66.0 MB/s 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:10.461 256+0 records in 00:06:10.461 256+0 records out 00:06:10.461 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175149 s, 59.9 MB/s 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.461 14:08:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
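The nbd_dd_data_verify output above follows a simple write-then-read-back pattern: 1 MiB of random data is written through each NBD device with O_DIRECT and then compared byte-for-byte against the source file. A minimal stand-alone sketch of that pattern, with the dd and cmp invocations taken from this run (the scratch path is arbitrary and only illustrative):

    # write/verify pattern used by nbd_dd_data_verify (illustrative sketch only)
    TMP=/tmp/nbdrandtest
    dd if=/dev/urandom of="$TMP" bs=4096 count=256              # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct   # write through the NBD device
        cmp -b -n 1M "$TMP" "$dev"                              # read back and compare
    done
    rm "$TMP"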
00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.722 14:08:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.984 14:08:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.984 14:08:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.984 14:08:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.984 14:08:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.984 14:08:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.984 14:08:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.984 14:08:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.984 14:08:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.984 14:08:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.984 14:08:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.984 14:08:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.984 14:08:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.984 14:08:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.245 14:08:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:11.245 [2024-06-07 14:08:34.778840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.245 [2024-06-07 14:08:34.809106] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.245 [2024-06-07 14:08:34.809108] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.245 [2024-06-07 14:08:34.840321] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.245 [2024-06-07 14:08:34.840357] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:14.548 14:08:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 296352 /var/tmp/spdk-nbd.sock 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 296352 ']' 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:14.548 14:08:37 event.app_repeat -- event/event.sh@39 -- # killprocess 296352 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 296352 ']' 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 296352 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 296352 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 296352' 00:06:14.548 killing process with pid 296352 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@968 -- # kill 296352 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@973 -- # wait 296352 00:06:14.548 spdk_app_start is called in Round 0. 00:06:14.548 Shutdown signal received, stop current app iteration 00:06:14.548 Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 reinitialization... 00:06:14.548 spdk_app_start is called in Round 1. 00:06:14.548 Shutdown signal received, stop current app iteration 00:06:14.548 Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 reinitialization... 00:06:14.548 spdk_app_start is called in Round 2. 00:06:14.548 Shutdown signal received, stop current app iteration 00:06:14.548 Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 reinitialization... 00:06:14.548 spdk_app_start is called in Round 3. 
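Each app_repeat round shown above drives the target entirely over the /var/tmp/spdk-nbd.sock RPC socket: create two malloc bdevs, export them as NBD devices, write and verify data, detach the devices, confirm nothing is left exported, and ask the instance to shut down so the next round can begin. A condensed sketch of one round, using the rpc.py calls exactly as they appear in this log (run from the SPDK repo root; 64 and 4096 are the malloc bdev size in MB and its block size in bytes):

    RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096              # -> Malloc0
    $RPC bdev_malloc_create 64 4096              # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0        # expose the bdevs as NBD block devices
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    # ... dd/cmp data verification as sketched earlier ...
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC nbd_get_disks                           # expected to report an empty list now
    $RPC spdk_kill_instance SIGTERM              # produces the "Shutdown signal received" lines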
00:06:14.548 Shutdown signal received, stop current app iteration 00:06:14.548 14:08:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:14.548 14:08:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:14.548 00:06:14.548 real 0m14.911s 00:06:14.548 user 0m32.396s 00:06:14.548 sys 0m2.064s 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:14.548 14:08:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.548 ************************************ 00:06:14.548 END TEST app_repeat 00:06:14.548 ************************************ 00:06:14.548 14:08:38 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:14.548 14:08:38 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:14.548 14:08:38 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:14.548 14:08:38 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.548 14:08:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.548 ************************************ 00:06:14.548 START TEST cpu_locks 00:06:14.548 ************************************ 00:06:14.548 14:08:38 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:14.548 * Looking for test storage... 00:06:14.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:14.548 14:08:38 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:14.548 14:08:38 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:14.548 14:08:38 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:14.548 14:08:38 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:14.548 14:08:38 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:14.548 14:08:38 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.548 14:08:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.548 ************************************ 00:06:14.548 START TEST default_locks 00:06:14.548 ************************************ 00:06:14.548 14:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:06:14.548 14:08:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.548 14:08:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=299618 00:06:14.548 14:08:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 299618 00:06:14.548 14:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 299618 ']' 00:06:14.548 14:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.548 14:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:14.548 14:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:14.548 14:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:14.548 14:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.809 [2024-06-07 14:08:38.224667] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:14.809 [2024-06-07 14:08:38.224727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid299618 ] 00:06:14.809 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.809 [2024-06-07 14:08:38.292757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.809 [2024-06-07 14:08:38.325643] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.070 14:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:15.070 14:08:38 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:06:15.070 14:08:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 299618 00:06:15.070 14:08:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 299618 00:06:15.070 14:08:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.641 lslocks: write error 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 299618 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 299618 ']' 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 299618 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 299618 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 299618' 00:06:15.641 killing process with pid 299618 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 299618 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 299618 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 299618 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 299618 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # 
waitforlisten 299618 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 299618 ']' 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (299618) - No such process 00:06:15.641 ERROR: process (pid: 299618) is no longer running 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:06:15.641 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:15.642 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:15.642 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:15.642 14:08:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:15.642 14:08:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.642 14:08:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.642 14:08:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.642 00:06:15.642 real 0m1.083s 00:06:15.642 user 0m1.070s 00:06:15.642 sys 0m0.517s 00:06:15.642 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.642 14:08:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.642 ************************************ 00:06:15.642 END TEST default_locks 00:06:15.642 ************************************ 00:06:15.902 14:08:39 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:15.902 14:08:39 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:15.902 14:08:39 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.902 14:08:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.902 ************************************ 00:06:15.902 START TEST default_locks_via_rpc 00:06:15.902 ************************************ 00:06:15.902 14:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:06:15.902 14:08:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=299748 00:06:15.902 14:08:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 299748 00:06:15.902 14:08:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.902 14:08:39 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 299748 ']' 00:06:15.902 14:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.902 14:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:15.902 14:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.902 14:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:15.902 14:08:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.902 [2024-06-07 14:08:39.399815] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:15.902 [2024-06-07 14:08:39.399872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid299748 ] 00:06:15.902 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.902 [2024-06-07 14:08:39.470171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.902 [2024-06-07 14:08:39.506672] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.845 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:16.845 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:16.846 14:08:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:16.846 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:16.846 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.846 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:16.846 14:08:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:16.846 14:08:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.846 14:08:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.846 14:08:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.846 14:08:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.846 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:16.846 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.846 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:16.846 14:08:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 299748 00:06:16.846 14:08:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 299748 00:06:16.846 14:08:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.106 14:08:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 299748 00:06:17.106 14:08:40 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 299748 ']' 00:06:17.106 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 299748 00:06:17.106 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:06:17.106 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:17.106 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 299748 00:06:17.106 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:17.106 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:17.106 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 299748' 00:06:17.106 killing process with pid 299748 00:06:17.106 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 299748 00:06:17.106 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 299748 00:06:17.367 00:06:17.368 real 0m1.552s 00:06:17.368 user 0m1.652s 00:06:17.368 sys 0m0.517s 00:06:17.368 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:17.368 14:08:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.368 ************************************ 00:06:17.368 END TEST default_locks_via_rpc 00:06:17.368 ************************************ 00:06:17.368 14:08:40 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:17.368 14:08:40 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:17.368 14:08:40 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.368 14:08:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.368 ************************************ 00:06:17.368 START TEST non_locking_app_on_locked_coremask 00:06:17.368 ************************************ 00:06:17.368 14:08:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:06:17.368 14:08:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=300059 00:06:17.368 14:08:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 300059 /var/tmp/spdk.sock 00:06:17.368 14:08:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.368 14:08:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 300059 ']' 00:06:17.368 14:08:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.368 14:08:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:17.368 14:08:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
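The default_locks and default_locks_via_rpc cases above revolve around SPDK's CPU core lock files: a target started with -m 0x1 takes a file lock for its core, which the scripts confirm by filtering the process's locks, and the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs (or the --disable-cpumask-locks start-up flag) drop and re-take those locks at runtime. A minimal sketch of the check, assuming $PID holds the spdk_tgt pid and the SPDK repo root is the working directory:

    # pattern from event/cpu_locks.sh: is a CPU core lock held by this target?
    if lslocks -p "$PID" | grep -q spdk_cpu_lock; then
        echo "core lock held"
    else
        echo "no core lock held"
    fi
    # the locks can be released and re-acquired over RPC while the target runs:
    scripts/rpc.py framework_disable_cpumask_locks
    scripts/rpc.py framework_enable_cpumask_locks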
00:06:17.368 14:08:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:17.368 14:08:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.628 [2024-06-07 14:08:41.023965] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:17.628 [2024-06-07 14:08:41.024015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300059 ] 00:06:17.628 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.628 [2024-06-07 14:08:41.091500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.628 [2024-06-07 14:08:41.128405] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.197 14:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:18.198 14:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:18.198 14:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=300352 00:06:18.198 14:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 300352 /var/tmp/spdk2.sock 00:06:18.198 14:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 300352 ']' 00:06:18.198 14:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:18.198 14:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.198 14:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:18.198 14:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.198 14:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:18.198 14:08:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.198 [2024-06-07 14:08:41.831878] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:18.198 [2024-06-07 14:08:41.831932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300352 ] 00:06:18.458 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.458 [2024-06-07 14:08:41.930209] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:18.458 [2024-06-07 14:08:41.930236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.458 [2024-06-07 14:08:41.993855] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.030 14:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:19.030 14:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:19.030 14:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 300059 00:06:19.030 14:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.030 14:08:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 300059 00:06:19.603 lslocks: write error 00:06:19.603 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 300059 00:06:19.603 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 300059 ']' 00:06:19.603 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 300059 00:06:19.603 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:19.603 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:19.603 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 300059 00:06:19.902 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:19.902 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:19.902 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 300059' 00:06:19.902 killing process with pid 300059 00:06:19.902 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 300059 00:06:19.902 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 300059 00:06:20.164 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 300352 00:06:20.164 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 300352 ']' 00:06:20.164 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 300352 00:06:20.164 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:20.164 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:20.164 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 300352 00:06:20.164 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:20.164 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:20.164 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 300352' 00:06:20.164 killing 
process with pid 300352 00:06:20.164 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 300352 00:06:20.164 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 300352 00:06:20.426 00:06:20.426 real 0m2.932s 00:06:20.426 user 0m3.182s 00:06:20.426 sys 0m0.872s 00:06:20.426 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:20.426 14:08:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.426 ************************************ 00:06:20.426 END TEST non_locking_app_on_locked_coremask 00:06:20.426 ************************************ 00:06:20.426 14:08:43 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:20.426 14:08:43 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:20.426 14:08:43 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:20.426 14:08:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.426 ************************************ 00:06:20.426 START TEST locking_app_on_unlocked_coremask 00:06:20.426 ************************************ 00:06:20.426 14:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:06:20.426 14:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=300732 00:06:20.426 14:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 300732 /var/tmp/spdk.sock 00:06:20.426 14:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:20.426 14:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 300732 ']' 00:06:20.426 14:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.426 14:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:20.426 14:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.426 14:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:20.426 14:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.426 [2024-06-07 14:08:44.030788] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:20.426 [2024-06-07 14:08:44.030841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300732 ] 00:06:20.426 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.687 [2024-06-07 14:08:44.099427] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:20.687 [2024-06-07 14:08:44.099458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.687 [2024-06-07 14:08:44.131443] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.258 14:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:21.258 14:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:21.258 14:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=301023 00:06:21.258 14:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 301023 /var/tmp/spdk2.sock 00:06:21.258 14:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 301023 ']' 00:06:21.258 14:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:21.258 14:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.258 14:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:21.258 14:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.258 14:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:21.258 14:08:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.258 [2024-06-07 14:08:44.855037] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:06:21.258 [2024-06-07 14:08:44.855090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301023 ] 00:06:21.258 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.518 [2024-06-07 14:08:44.954532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.518 [2024-06-07 14:08:45.017670] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.088 14:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:22.088 14:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:22.088 14:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 301023 00:06:22.088 14:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 301023 00:06:22.088 14:08:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.659 lslocks: write error 00:06:22.659 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 300732 00:06:22.659 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 300732 ']' 00:06:22.659 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 300732 00:06:22.659 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:22.659 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:22.659 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 300732 00:06:22.659 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:22.659 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:22.659 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 300732' 00:06:22.659 killing process with pid 300732 00:06:22.659 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 300732 00:06:22.659 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 300732 00:06:22.921 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 301023 00:06:22.921 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 301023 ']' 00:06:22.921 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 301023 00:06:22.921 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:22.921 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:22.921 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 301023 00:06:22.921 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:22.921 
14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:22.921 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 301023' 00:06:22.921 killing process with pid 301023 00:06:22.921 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 301023 00:06:22.921 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 301023 00:06:23.182 00:06:23.182 real 0m2.773s 00:06:23.182 user 0m3.022s 00:06:23.182 sys 0m0.851s 00:06:23.182 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:23.182 14:08:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.182 ************************************ 00:06:23.182 END TEST locking_app_on_unlocked_coremask 00:06:23.182 ************************************ 00:06:23.182 14:08:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:23.182 14:08:46 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:23.182 14:08:46 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:23.182 14:08:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.182 ************************************ 00:06:23.182 START TEST locking_app_on_locked_coremask 00:06:23.182 ************************************ 00:06:23.182 14:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:06:23.182 14:08:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=301438 00:06:23.182 14:08:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 301438 /var/tmp/spdk.sock 00:06:23.182 14:08:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.182 14:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 301438 ']' 00:06:23.182 14:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.182 14:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:23.182 14:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.182 14:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:23.182 14:08:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.444 [2024-06-07 14:08:46.881331] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:06:23.444 [2024-06-07 14:08:46.881382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301438 ] 00:06:23.444 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.444 [2024-06-07 14:08:46.946188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.444 [2024-06-07 14:08:46.980008] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=301467 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 301467 /var/tmp/spdk2.sock 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 301467 /var/tmp/spdk2.sock 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 301467 /var/tmp/spdk2.sock 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 301467 ']' 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:24.014 14:08:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.275 [2024-06-07 14:08:47.686585] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:06:24.275 [2024-06-07 14:08:47.686640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301467 ] 00:06:24.275 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.275 [2024-06-07 14:08:47.785207] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 301438 has claimed it. 00:06:24.275 [2024-06-07 14:08:47.785250] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:24.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (301467) - No such process 00:06:24.846 ERROR: process (pid: 301467) is no longer running 00:06:24.846 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:24.846 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:24.846 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:24.846 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:24.846 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:24.846 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:24.846 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 301438 00:06:24.846 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 301438 00:06:24.846 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.106 lslocks: write error 00:06:25.106 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 301438 00:06:25.106 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 301438 ']' 00:06:25.106 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 301438 00:06:25.106 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:25.106 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:25.106 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 301438 00:06:25.107 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:25.367 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:25.367 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 301438' 00:06:25.367 killing process with pid 301438 00:06:25.367 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 301438 00:06:25.367 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 301438 00:06:25.367 00:06:25.367 real 0m2.125s 00:06:25.367 user 0m2.356s 00:06:25.367 sys 0m0.594s 00:06:25.367 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.367 14:08:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.367 ************************************ 00:06:25.367 END TEST locking_app_on_locked_coremask 00:06:25.367 ************************************ 00:06:25.367 14:08:48 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:25.367 14:08:48 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:25.367 14:08:48 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:25.367 14:08:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.628 ************************************ 00:06:25.628 START TEST locking_overlapped_coremask 00:06:25.628 ************************************ 00:06:25.628 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:06:25.628 14:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=301817 00:06:25.628 14:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 301817 /var/tmp/spdk.sock 00:06:25.628 14:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:25.628 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 301817 ']' 00:06:25.628 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.628 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:25.628 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.628 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:25.628 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.629 [2024-06-07 14:08:49.074761] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:06:25.629 [2024-06-07 14:08:49.074811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid301817 ] 00:06:25.629 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.629 [2024-06-07 14:08:49.141979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.629 [2024-06-07 14:08:49.181918] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.629 [2024-06-07 14:08:49.182041] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.629 [2024-06-07 14:08:49.182043] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=302075 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 302075 /var/tmp/spdk2.sock 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 302075 /var/tmp/spdk2.sock 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 302075 /var/tmp/spdk2.sock 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 302075 ']' 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:26.570 14:08:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.570 [2024-06-07 14:08:49.904004] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:06:26.570 [2024-06-07 14:08:49.904055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302075 ] 00:06:26.570 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.570 [2024-06-07 14:08:49.983956] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 301817 has claimed it. 00:06:26.570 [2024-06-07 14:08:49.983986] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (302075) - No such process 00:06:27.143 ERROR: process (pid: 302075) is no longer running 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 301817 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 301817 ']' 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 301817 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 301817 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 301817' 00:06:27.143 killing process with pid 301817 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 301817 
00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 301817 00:06:27.143 00:06:27.143 real 0m1.741s 00:06:27.143 user 0m4.975s 00:06:27.143 sys 0m0.399s 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:27.143 14:08:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.143 ************************************ 00:06:27.143 END TEST locking_overlapped_coremask 00:06:27.143 ************************************ 00:06:27.403 14:08:50 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:27.403 14:08:50 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:27.403 14:08:50 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:27.403 14:08:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.403 ************************************ 00:06:27.403 START TEST locking_overlapped_coremask_via_rpc 00:06:27.403 ************************************ 00:06:27.403 14:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:06:27.403 14:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=302191 00:06:27.403 14:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 302191 /var/tmp/spdk.sock 00:06:27.403 14:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:27.403 14:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 302191 ']' 00:06:27.403 14:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.403 14:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:27.403 14:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.403 14:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:27.403 14:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.403 [2024-06-07 14:08:50.889297] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:27.403 [2024-06-07 14:08:50.889348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302191 ] 00:06:27.403 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.403 [2024-06-07 14:08:50.956131] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:27.403 [2024-06-07 14:08:50.956161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.403 [2024-06-07 14:08:50.994730] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.403 [2024-06-07 14:08:50.994852] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.403 [2024-06-07 14:08:50.994854] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.345 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:28.345 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:28.345 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=302511 00:06:28.345 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 302511 /var/tmp/spdk2.sock 00:06:28.345 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 302511 ']' 00:06:28.345 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:28.345 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.345 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:28.345 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.345 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:28.345 14:08:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.345 [2024-06-07 14:08:51.701338] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:28.345 [2024-06-07 14:08:51.701394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302511 ] 00:06:28.345 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.345 [2024-06-07 14:08:51.781850] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.345 [2024-06-07 14:08:51.781874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.345 [2024-06-07 14:08:51.839261] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.345 [2024-06-07 14:08:51.839330] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.345 [2024-06-07 14:08:51.839332] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.918 [2024-06-07 14:08:52.478251] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 302191 has claimed it. 
00:06:28.918 request: 00:06:28.918 { 00:06:28.918 "method": "framework_enable_cpumask_locks", 00:06:28.918 "req_id": 1 00:06:28.918 } 00:06:28.918 Got JSON-RPC error response 00:06:28.918 response: 00:06:28.918 { 00:06:28.918 "code": -32603, 00:06:28.918 "message": "Failed to claim CPU core: 2" 00:06:28.918 } 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 302191 /var/tmp/spdk.sock 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 302191 ']' 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:28.918 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 302511 /var/tmp/spdk2.sock 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 302511 ']' 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:29.179 00:06:29.179 real 0m1.982s 00:06:29.179 user 0m0.757s 00:06:29.179 sys 0m0.149s 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:29.179 14:08:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.179 ************************************ 00:06:29.179 END TEST locking_overlapped_coremask_via_rpc 00:06:29.179 ************************************ 00:06:29.439 14:08:52 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:29.439 14:08:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 302191 ]] 00:06:29.439 14:08:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 302191 00:06:29.439 14:08:52 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 302191 ']' 00:06:29.439 14:08:52 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 302191 00:06:29.439 14:08:52 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:29.439 14:08:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:29.439 14:08:52 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 302191 00:06:29.439 14:08:52 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:29.439 14:08:52 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:29.439 14:08:52 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 302191' 00:06:29.439 killing process with pid 302191 00:06:29.439 14:08:52 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 302191 00:06:29.439 14:08:52 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 302191 00:06:29.698 14:08:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 302511 ]] 00:06:29.698 14:08:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 302511 00:06:29.698 14:08:53 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 302511 ']' 00:06:29.698 14:08:53 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 302511 00:06:29.698 14:08:53 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:29.698 14:08:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
00:06:29.698 14:08:53 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 302511 00:06:29.698 14:08:53 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:29.698 14:08:53 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:29.698 14:08:53 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 302511' 00:06:29.698 killing process with pid 302511 00:06:29.698 14:08:53 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 302511 00:06:29.698 14:08:53 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 302511 00:06:29.698 14:08:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:29.958 14:08:53 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:29.958 14:08:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 302191 ]] 00:06:29.958 14:08:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 302191 00:06:29.958 14:08:53 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 302191 ']' 00:06:29.958 14:08:53 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 302191 00:06:29.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (302191) - No such process 00:06:29.958 14:08:53 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 302191 is not found' 00:06:29.959 Process with pid 302191 is not found 00:06:29.959 14:08:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 302511 ]] 00:06:29.959 14:08:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 302511 00:06:29.959 14:08:53 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 302511 ']' 00:06:29.959 14:08:53 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 302511 00:06:29.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (302511) - No such process 00:06:29.959 14:08:53 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 302511 is not found' 00:06:29.959 Process with pid 302511 is not found 00:06:29.959 14:08:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:29.959 00:06:29.959 real 0m15.308s 00:06:29.959 user 0m26.550s 00:06:29.959 sys 0m4.787s 00:06:29.959 14:08:53 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:29.959 14:08:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.959 ************************************ 00:06:29.959 END TEST cpu_locks 00:06:29.959 ************************************ 00:06:29.959 00:06:29.959 real 0m40.597s 00:06:29.959 user 1m19.575s 00:06:29.959 sys 0m7.858s 00:06:29.959 14:08:53 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:29.959 14:08:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.959 ************************************ 00:06:29.959 END TEST event 00:06:29.959 ************************************ 00:06:29.959 14:08:53 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:29.959 14:08:53 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:29.959 14:08:53 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:29.959 14:08:53 -- common/autotest_common.sh@10 -- # set +x 00:06:29.959 ************************************ 00:06:29.959 START TEST thread 00:06:29.959 ************************************ 00:06:29.959 14:08:53 thread -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:29.959 * Looking for test storage... 00:06:29.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:29.959 14:08:53 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:29.959 14:08:53 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:29.959 14:08:53 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:29.959 14:08:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.959 ************************************ 00:06:29.959 START TEST thread_poller_perf 00:06:29.959 ************************************ 00:06:29.959 14:08:53 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:30.219 [2024-06-07 14:08:53.622413] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:30.219 [2024-06-07 14:08:53.622492] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302962 ] 00:06:30.220 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.220 [2024-06-07 14:08:53.700399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.220 [2024-06-07 14:08:53.734568] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.220 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:31.160 ====================================== 00:06:31.160 busy:2409653968 (cyc) 00:06:31.160 total_run_count: 287000 00:06:31.160 tsc_hz: 2400000000 (cyc) 00:06:31.160 ====================================== 00:06:31.160 poller_cost: 8396 (cyc), 3498 (nsec) 00:06:31.160 00:06:31.160 real 0m1.180s 00:06:31.160 user 0m1.092s 00:06:31.160 sys 0m0.084s 00:06:31.160 14:08:54 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:31.160 14:08:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:31.160 ************************************ 00:06:31.160 END TEST thread_poller_perf 00:06:31.160 ************************************ 00:06:31.420 14:08:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:31.420 14:08:54 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:31.420 14:08:54 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:31.420 14:08:54 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.420 ************************************ 00:06:31.420 START TEST thread_poller_perf 00:06:31.420 ************************************ 00:06:31.420 14:08:54 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:31.420 [2024-06-07 14:08:54.879645] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:06:31.420 [2024-06-07 14:08:54.879726] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid303213 ] 00:06:31.420 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.420 [2024-06-07 14:08:54.950842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.420 [2024-06-07 14:08:54.986464] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.420 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:32.803 ====================================== 00:06:32.803 busy:2402077846 (cyc) 00:06:32.803 total_run_count: 3815000 00:06:32.803 tsc_hz: 2400000000 (cyc) 00:06:32.803 ====================================== 00:06:32.803 poller_cost: 629 (cyc), 262 (nsec) 00:06:32.803 00:06:32.803 real 0m1.167s 00:06:32.803 user 0m1.094s 00:06:32.803 sys 0m0.069s 00:06:32.803 14:08:56 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:32.803 14:08:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.803 ************************************ 00:06:32.803 END TEST thread_poller_perf 00:06:32.803 ************************************ 00:06:32.803 14:08:56 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:32.803 00:06:32.803 real 0m2.594s 00:06:32.803 user 0m2.268s 00:06:32.803 sys 0m0.332s 00:06:32.803 14:08:56 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:32.803 14:08:56 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.803 ************************************ 00:06:32.803 END TEST thread 00:06:32.803 ************************************ 00:06:32.803 14:08:56 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:32.803 14:08:56 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:32.803 14:08:56 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:32.803 14:08:56 -- common/autotest_common.sh@10 -- # set +x 00:06:32.803 ************************************ 00:06:32.803 START TEST accel 00:06:32.803 ************************************ 00:06:32.803 14:08:56 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:32.804 * Looking for test storage... 00:06:32.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:32.804 14:08:56 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:32.804 14:08:56 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:32.804 14:08:56 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:32.804 14:08:56 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=303445 00:06:32.804 14:08:56 accel -- accel/accel.sh@63 -- # waitforlisten 303445 00:06:32.804 14:08:56 accel -- common/autotest_common.sh@830 -- # '[' -z 303445 ']' 00:06:32.804 14:08:56 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.804 14:08:56 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:32.804 14:08:56 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:32.804 14:08:56 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:32.804 14:08:56 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:32.804 14:08:56 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:32.804 14:08:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.804 14:08:56 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.804 14:08:56 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.804 14:08:56 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.804 14:08:56 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.804 14:08:56 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.804 14:08:56 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:32.804 14:08:56 accel -- accel/accel.sh@41 -- # jq -r . 00:06:32.804 [2024-06-07 14:08:56.307225] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:32.804 [2024-06-07 14:08:56.307295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid303445 ] 00:06:32.804 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.804 [2024-06-07 14:08:56.377831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.804 [2024-06-07 14:08:56.418391] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.745 14:08:57 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:33.745 14:08:57 accel -- common/autotest_common.sh@863 -- # return 0 00:06:33.745 14:08:57 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:33.745 14:08:57 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:33.746 14:08:57 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:33.746 14:08:57 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:33.746 14:08:57 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:33.746 14:08:57 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:33.746 14:08:57 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.746 14:08:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.746 14:08:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.746 14:08:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.746 14:08:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.746 14:08:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.746 14:08:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.746 14:08:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.746 14:08:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.746 14:08:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.746 14:08:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.746 14:08:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.746 14:08:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.746 14:08:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.746 14:08:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.746 14:08:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.746 14:08:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.746 14:08:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.746 14:08:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.746 14:08:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.746 14:08:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.746 14:08:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.746 
14:08:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.746 14:08:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.746 14:08:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.746 14:08:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.746 14:08:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.746 14:08:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.746 14:08:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.746 14:08:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:33.746 14:08:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:33.746 14:08:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:33.746 14:08:57 accel -- accel/accel.sh@75 -- # killprocess 303445 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@949 -- # '[' -z 303445 ']' 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@953 -- # kill -0 303445 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@954 -- # uname 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 303445 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 303445' 00:06:33.746 killing process with pid 303445 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@968 -- # kill 303445 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@973 -- # wait 303445 00:06:33.746 14:08:57 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:33.746 14:08:57 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:33.746 14:08:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.008 14:08:57 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:06:34.008 14:08:57 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:34.008 14:08:57 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:34.008 14:08:57 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.008 14:08:57 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.008 14:08:57 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.008 14:08:57 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.008 14:08:57 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.008 14:08:57 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:34.008 14:08:57 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:34.008 14:08:57 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:34.008 14:08:57 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:34.008 14:08:57 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:34.008 14:08:57 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:34.008 14:08:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:34.008 14:08:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.008 ************************************ 00:06:34.008 START TEST accel_missing_filename 00:06:34.008 ************************************ 00:06:34.008 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:06:34.008 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:34.008 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:34.008 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:34.008 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:34.008 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:34.008 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:34.008 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:34.008 14:08:57 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:34.008 14:08:57 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:34.008 14:08:57 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.008 14:08:57 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.008 14:08:57 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.008 14:08:57 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.008 14:08:57 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.008 14:08:57 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:34.008 14:08:57 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:34.008 [2024-06-07 14:08:57.541938] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:34.008 [2024-06-07 14:08:57.542014] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid303759 ] 00:06:34.008 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.008 [2024-06-07 14:08:57.610250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.008 [2024-06-07 14:08:57.640865] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.269 [2024-06-07 14:08:57.672485] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.269 [2024-06-07 14:08:57.709402] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:34.269 A filename is required. 
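Every accel_perf launch in this log is preceded by the same build_accel_config trace: an accel_json_cfg array is initialized, the optional hardware-module fragments are skipped (the 0 -gt 0 and -n '' checks), and the result is joined with IFS=, and run through jq -r . before being handed to accel_perf on /dev/fd/62. A hedged sketch of that join-and-validate idea; the fragments and the JSON envelope below are assumptions, since the exact wrapper accel.sh builds is not visible in this log:

    # Assumed fragments; the real ones depend on which accel drivers are enabled.
    accel_json_cfg=('{"method": "example_driver_a"}' '{"method": "example_driver_b"}')
    join_cfg() { local IFS=,; printf '[%s]\n' "${accel_json_cfg[*]}"; }
    join_cfg | jq -r .   # jq acts as a cheap validity check / pretty-printer here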
00:06:34.269 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:06:34.269 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:34.269 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:06:34.269 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:06:34.269 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:06:34.269 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:34.269 00:06:34.269 real 0m0.237s 00:06:34.269 user 0m0.163s 00:06:34.269 sys 0m0.114s 00:06:34.269 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:34.269 14:08:57 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:34.269 ************************************ 00:06:34.269 END TEST accel_missing_filename 00:06:34.269 ************************************ 00:06:34.269 14:08:57 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.269 14:08:57 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:34.269 14:08:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:34.269 14:08:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.269 ************************************ 00:06:34.269 START TEST accel_compress_verify 00:06:34.269 ************************************ 00:06:34.269 14:08:57 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.269 14:08:57 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:06:34.269 14:08:57 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.269 14:08:57 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:34.269 14:08:57 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:34.269 14:08:57 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:34.269 14:08:57 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:34.269 14:08:57 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.269 14:08:57 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:34.269 14:08:57 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:34.269 14:08:57 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.269 14:08:57 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.269 14:08:57 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.269 14:08:57 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.269 14:08:57 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.269 
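The es=234, es=106, es=1 sequence above is the autotest_common.sh machinery behind NOT: run the wrapped command, normalize exit statuses above 128 (signal deaths), and treat any remaining non-zero status as the expected outcome. A condensed sketch of that expected-failure wrapper, with a simplified name and without the case table the real helper uses:

    not() {
      local es=0
      "$@" || es=$?                          # capture the wrapped command's exit status
      (( es > 128 )) && es=$(( es - 128 ))   # 128+N means death by signal N; still a failure
      (( es != 0 ))                          # the wrapper succeeds only when the command failed
    }
    # e.g. compress without an input file (-l) must fail, exactly as the log shows:
    not ./build/examples/accel_perf -t 1 -w compress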
14:08:57 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:34.269 14:08:57 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:34.269 [2024-06-07 14:08:57.855153] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:34.269 [2024-06-07 14:08:57.855241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid303795 ] 00:06:34.269 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.530 [2024-06-07 14:08:57.925681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.530 [2024-06-07 14:08:57.961492] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.530 [2024-06-07 14:08:57.994156] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.530 [2024-06-07 14:08:58.031373] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:34.530 00:06:34.530 Compression does not support the verify option, aborting. 00:06:34.530 14:08:58 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:06:34.530 14:08:58 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:34.530 14:08:58 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:06:34.530 14:08:58 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:06:34.530 14:08:58 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:06:34.530 14:08:58 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:34.530 00:06:34.530 real 0m0.248s 00:06:34.530 user 0m0.181s 00:06:34.530 sys 0m0.108s 00:06:34.530 14:08:58 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:34.530 14:08:58 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:34.530 ************************************ 00:06:34.530 END TEST accel_compress_verify 00:06:34.530 ************************************ 00:06:34.530 14:08:58 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:34.530 14:08:58 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:34.530 14:08:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:34.530 14:08:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.530 ************************************ 00:06:34.530 START TEST accel_wrong_workload 00:06:34.530 ************************************ 00:06:34.530 14:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:06:34.530 14:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:06:34.530 14:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:34.530 14:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:34.530 14:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:34.530 14:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:34.530 14:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:34.530 14:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:06:34.530 
14:08:58 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:34.530 14:08:58 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:34.530 14:08:58 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.530 14:08:58 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.530 14:08:58 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.530 14:08:58 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.530 14:08:58 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.530 14:08:58 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:34.530 14:08:58 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:34.530 Unsupported workload type: foobar 00:06:34.530 [2024-06-07 14:08:58.173648] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:34.791 accel_perf options: 00:06:34.791 [-h help message] 00:06:34.791 [-q queue depth per core] 00:06:34.791 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:34.791 [-T number of threads per core 00:06:34.791 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:34.791 [-t time in seconds] 00:06:34.791 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:34.791 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:34.791 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:34.791 [-l for compress/decompress workloads, name of uncompressed input file 00:06:34.791 [-S for crc32c workload, use this seed value (default 0) 00:06:34.791 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:34.791 [-f for fill workload, use this BYTE value (default 255) 00:06:34.791 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:34.791 [-y verify result if this switch is on] 00:06:34.791 [-a tasks to allocate per core (default: same value as -q)] 00:06:34.791 Can be used to spread operations across a wider range of memory. 
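The usage text printed above is the option list for accel_perf; the failing foobar run exists only to prove that an unknown -w value is rejected. For reference, a couple of well-formed invocations built strictly from the options shown, with paths assuming an SPDK build tree like the one in this workspace:

    # 1-second software copy test: queue depth 64, 4 KiB transfers, verify results
    ./build/examples/accel_perf -t 1 -w copy -q 64 -o 4096 -y
    # crc32c with a non-default seed and a single-buffer io vector
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -C 1 -y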
00:06:34.791 14:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:06:34.791 14:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:34.791 14:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:34.791 14:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:34.791 00:06:34.791 real 0m0.035s 00:06:34.791 user 0m0.021s 00:06:34.791 sys 0m0.013s 00:06:34.791 14:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:34.791 14:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:34.791 ************************************ 00:06:34.791 END TEST accel_wrong_workload 00:06:34.791 ************************************ 00:06:34.791 Error: writing output failed: Broken pipe 00:06:34.791 14:08:58 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:34.791 14:08:58 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:06:34.791 14:08:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:34.791 14:08:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.791 ************************************ 00:06:34.791 START TEST accel_negative_buffers 00:06:34.791 ************************************ 00:06:34.791 14:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:34.791 14:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:06:34.791 14:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:34.791 14:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:34.791 14:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:34.791 14:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:34.791 14:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:34.791 14:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:06:34.791 14:08:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:34.791 14:08:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:34.791 14:08:58 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.791 14:08:58 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.791 14:08:58 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.791 14:08:58 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.791 14:08:58 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.791 14:08:58 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:34.791 14:08:58 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:34.791 -x option must be non-negative. 
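accel_wrong_workload and accel_negative_buffers are pure argument-validation tests: no I/O runs, accel_perf only has to refuse the command line. The two rejected invocations, plus a minimal valid counterpart (per the usage text, xor needs at least two source buffers):

    ./build/examples/accel_perf -t 1 -w foobar           # rejected: unsupported workload type
    ./build/examples/accel_perf -t 1 -w xor -y -x -1     # rejected: -x must be non-negative
    ./build/examples/accel_perf -t 1 -w xor -y -x 2      # valid: two source buffers is the documented minimum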
00:06:34.791 [2024-06-07 14:08:58.282340] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:34.791 accel_perf options: 00:06:34.791 [-h help message] 00:06:34.791 [-q queue depth per core] 00:06:34.791 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:34.791 [-T number of threads per core 00:06:34.791 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:34.791 [-t time in seconds] 00:06:34.791 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:34.791 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:34.791 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:34.791 [-l for compress/decompress workloads, name of uncompressed input file 00:06:34.791 [-S for crc32c workload, use this seed value (default 0) 00:06:34.791 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:34.791 [-f for fill workload, use this BYTE value (default 255) 00:06:34.791 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:34.791 [-y verify result if this switch is on] 00:06:34.791 [-a tasks to allocate per core (default: same value as -q)] 00:06:34.791 Can be used to spread operations across a wider range of memory. 00:06:34.791 14:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:06:34.791 14:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:34.791 14:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:34.791 14:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:34.791 00:06:34.791 real 0m0.034s 00:06:34.791 user 0m0.016s 00:06:34.791 sys 0m0.018s 00:06:34.791 14:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:34.791 14:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:34.791 ************************************ 00:06:34.791 END TEST accel_negative_buffers 00:06:34.791 ************************************ 00:06:34.791 Error: writing output failed: Broken pipe 00:06:34.791 14:08:58 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:34.791 14:08:58 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:34.791 14:08:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:34.791 14:08:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.791 ************************************ 00:06:34.791 START TEST accel_crc32c 00:06:34.792 ************************************ 00:06:34.792 14:08:58 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:34.792 14:08:58 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:34.792 14:08:58 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:34.792 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:34.792 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:34.792 14:08:58 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:34.792 14:08:58 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:34.792 14:08:58 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:34.792 14:08:58 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.792 14:08:58 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.792 14:08:58 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.792 14:08:58 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.792 14:08:58 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.792 14:08:58 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:34.792 14:08:58 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:34.792 [2024-06-07 14:08:58.392102] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:34.792 [2024-06-07 14:08:58.392174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid304079 ] 00:06:34.792 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.051 [2024-06-07 14:08:58.463724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.051 [2024-06-07 14:08:58.503168] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.051 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.051 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.051 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.051 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.051 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.052 14:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.991 14:08:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.991 14:08:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.991 14:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.991 14:08:59 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.991 14:08:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.991 14:08:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.991 14:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.991 14:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.991 14:08:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.991 14:08:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.991 14:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.991 14:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.991 14:08:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.991 14:08:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.991 14:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.992 14:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.992 14:08:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.992 14:08:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.992 14:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.992 14:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.992 14:08:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:35.992 14:08:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:35.992 14:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.992 14:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.992 14:08:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.992 14:08:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:35.992 14:08:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.992 00:06:35.992 real 0m1.255s 00:06:35.992 user 0m1.151s 00:06:35.992 sys 0m0.116s 00:06:35.992 14:08:59 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:35.992 14:08:59 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:35.992 ************************************ 00:06:35.992 END TEST accel_crc32c 00:06:35.992 ************************************ 00:06:36.252 14:08:59 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:36.252 14:08:59 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:36.252 14:08:59 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:36.252 14:08:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.252 ************************************ 00:06:36.252 START TEST accel_crc32c_C2 00:06:36.252 ************************************ 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:36.252 14:08:59 
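accel_crc32c finishes, like every workload test that follows, with the same three checks: a module name was captured, an opcode was captured, and the module matches what the expected_opcs map predicts (software here). A hedged restatement of that assertion logic; the variable names follow the trace but are not copied verbatim from accel.sh:

    accel_module=software    # module reported for the completed run
    accel_opc=crc32c         # opcode the test exercised
    [[ -n $accel_module ]]                                   # a module was recorded
    [[ -n $accel_opc ]]                                      # an opcode was recorded
    [[ $accel_module == "${expected_opcs[$accel_opc]}" ]]    # and it matches the expected assignment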
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:36.252 [2024-06-07 14:08:59.724139] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:36.252 [2024-06-07 14:08:59.724243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid304237 ] 00:06:36.252 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.252 [2024-06-07 14:08:59.803299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.252 [2024-06-07 14:08:59.840412] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:36.252 14:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.637 
14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.637 00:06:37.637 real 0m1.263s 00:06:37.637 user 0m1.158s 00:06:37.637 sys 0m0.116s 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:37.637 14:09:00 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:37.637 ************************************ 00:06:37.637 END TEST accel_crc32c_C2 00:06:37.637 ************************************ 00:06:37.637 14:09:00 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:37.637 14:09:00 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:37.637 14:09:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:37.637 14:09:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.637 ************************************ 00:06:37.637 START TEST accel_copy 00:06:37.637 ************************************ 00:06:37.637 14:09:01 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:37.637 14:09:01 
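The accel_crc32c_C2 case that just completed differs from plain accel_crc32c only in its command line: the seed is left at the default and -C 2 makes each operation run over a two-buffer io vector, per the -C option in the usage text. A direct, hedged equivalent outside the test harness:

    # same workload as accel_crc32c, but chained over two source buffers per operation
    ./build/examples/accel_perf -t 1 -w crc32c -y -C 2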
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:37.637 [2024-06-07 14:09:01.059768] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:37.637 [2024-06-07 14:09:01.059838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid304609 ] 00:06:37.637 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.637 [2024-06-07 14:09:01.126077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.637 [2024-06-07 14:09:01.157185] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.637 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.638 14:09:01 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:37.638 14:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:39.023 14:09:02 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.023 00:06:39.023 real 0m1.241s 00:06:39.023 user 0m1.141s 00:06:39.023 sys 0m0.111s 00:06:39.023 14:09:02 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:39.023 14:09:02 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:39.023 ************************************ 00:06:39.023 END TEST accel_copy 00:06:39.023 ************************************ 00:06:39.023 14:09:02 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.023 14:09:02 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:39.023 14:09:02 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:39.023 14:09:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.023 ************************************ 00:06:39.023 START TEST accel_fill 00:06:39.023 ************************************ 00:06:39.023 14:09:02 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.023 14:09:02 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:39.023 [2024-06-07 14:09:02.376951] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:39.023 [2024-06-07 14:09:02.377011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid305008 ] 00:06:39.023 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.023 [2024-06-07 14:09:02.442990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.023 [2024-06-07 14:09:02.473851] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.023 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.024 14:09:02 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.024 14:09:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:39.965 14:09:03 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.965 00:06:39.965 real 0m1.241s 00:06:39.965 user 0m1.137s 00:06:39.965 sys 0m0.115s 00:06:39.965 14:09:03 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:39.965 14:09:03 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:39.965 ************************************ 00:06:39.965 END TEST accel_fill 00:06:39.965 ************************************ 00:06:40.225 14:09:03 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:40.225 14:09:03 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:40.225 14:09:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:40.225 14:09:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.225 ************************************ 00:06:40.225 START TEST accel_copy_crc32c 00:06:40.225 ************************************ 00:06:40.225 14:09:03 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:06:40.225 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:40.225 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:40.225 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.225 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.225 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:40.225 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:40.225 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:40.225 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.225 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.225 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.225 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.225 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.225 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:40.225 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
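The copy_crc32c case started above pairs a plain buffer copy with a CRC-32C checksum and, per the trace, runs on the software accel module with 4096-byte buffers. A minimal standalone reproduction, reusing only the flags visible in the trace (the -c /dev/fd/62 JSON config that accel.sh pipes in is omitted here on the assumption that accel_perf can fall back to its defaults), would be:

  # copy+crc32c workload for 1 second with result verification (-y), as driven by accel.sh above
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y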
00:06:40.225 [2024-06-07 14:09:03.693643] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:40.225 [2024-06-07 14:09:03.693713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid305361 ] 00:06:40.225 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.226 [2024-06-07 14:09:03.761273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.226 [2024-06-07 14:09:03.796221] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.226 14:09:03 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.226 14:09:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.611 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.612 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:41.612 14:09:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.612 00:06:41.612 real 0m1.246s 00:06:41.612 user 0m1.143s 00:06:41.612 sys 0m0.116s 00:06:41.612 14:09:04 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:41.612 14:09:04 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:41.612 ************************************ 00:06:41.612 END TEST accel_copy_crc32c 00:06:41.612 ************************************ 00:06:41.612 14:09:04 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:41.612 14:09:04 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:41.612 14:09:04 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:41.612 14:09:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.612 ************************************ 00:06:41.612 START TEST accel_copy_crc32c_C2 00:06:41.612 ************************************ 00:06:41.612 14:09:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:41.612 14:09:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.612 14:09:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:41.612 14:09:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:06:41.612 14:09:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:41.612 14:09:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.612 14:09:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.612 14:09:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.612 14:09:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.612 14:09:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.612 14:09:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.612 14:09:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:41.612 14:09:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:41.612 [2024-06-07 14:09:05.015158] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:41.612 [2024-06-07 14:09:05.015236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid305539 ] 00:06:41.612 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.612 [2024-06-07 14:09:05.083311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.612 [2024-06-07 14:09:05.117372] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:41.612 14:09:05 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.612 14:09:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.997 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:42.998 14:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.998 00:06:42.998 real 0m1.246s 00:06:42.998 user 0m1.147s 00:06:42.998 sys 0m0.110s 00:06:42.998 14:09:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:42.998 14:09:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:06:42.998 ************************************ 00:06:42.998 END TEST accel_copy_crc32c_C2 00:06:42.998 ************************************ 00:06:42.998 14:09:06 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:42.998 14:09:06 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:42.998 14:09:06 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:42.998 14:09:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.998 ************************************ 00:06:42.998 START TEST accel_dualcast 00:06:42.998 ************************************ 00:06:42.998 14:09:06 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:42.998 [2024-06-07 14:09:06.334446] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
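The dualcast case being brought up here copies a single source buffer into two destination buffers in one operation; the trace again reports the software module and 4096-byte buffers. Under the same assumption as above (the piped -c config omitted), a standalone run would be:

  # dualcast: one 4096-byte source written to two destinations, 1 second, verified
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dualcast -y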
00:06:42.998 [2024-06-07 14:09:06.334513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid305740 ] 00:06:42.998 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.998 [2024-06-07 14:09:06.404289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.998 [2024-06-07 14:09:06.438416] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 
14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:42.998 14:09:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.941 14:09:07 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:43.941 14:09:07 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.941 00:06:43.941 real 0m1.247s 00:06:43.941 user 0m1.147s 00:06:43.941 sys 0m0.111s 00:06:43.941 14:09:07 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:43.941 14:09:07 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:43.941 ************************************ 00:06:43.941 END TEST accel_dualcast 00:06:43.941 ************************************ 00:06:44.202 14:09:07 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:44.202 14:09:07 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:44.202 14:09:07 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:44.202 14:09:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.202 ************************************ 00:06:44.202 START TEST accel_compare 00:06:44.202 ************************************ 00:06:44.202 14:09:07 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:44.202 [2024-06-07 14:09:07.656668] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
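The compare case set up above checks two equal-sized buffers for equality instead of moving data; the trace shows the software module and a 4096-byte buffer. Standalone, with the same caveat about the omitted config:

  # compare two 4096-byte buffers for 1 second with verification
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compare -y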
00:06:44.202 [2024-06-07 14:09:07.656752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid306095 ] 00:06:44.202 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.202 [2024-06-07 14:09:07.726352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.202 [2024-06-07 14:09:07.761437] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.202 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.203 14:09:07 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:44.203 14:09:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.588 14:09:08 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:45.588 14:09:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.588 00:06:45.588 real 0m1.251s 00:06:45.588 user 0m1.151s 00:06:45.588 sys 0m0.110s 00:06:45.588 14:09:08 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:45.588 14:09:08 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:45.588 ************************************ 00:06:45.588 END TEST accel_compare 00:06:45.588 ************************************ 00:06:45.588 14:09:08 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:45.588 14:09:08 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:06:45.588 14:09:08 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:45.588 14:09:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.588 ************************************ 00:06:45.588 START TEST accel_xor 00:06:45.588 ************************************ 00:06:45.588 14:09:08 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:06:45.588 14:09:08 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:45.588 14:09:08 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:45.588 14:09:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.588 14:09:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.588 14:09:08 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:45.588 14:09:08 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:45.588 14:09:08 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:45.588 14:09:08 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.588 14:09:08 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.588 14:09:08 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.588 14:09:08 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.588 14:09:08 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.588 14:09:08 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:45.588 14:09:08 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:45.588 [2024-06-07 14:09:08.980380] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
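The first xor case starting here XORs source buffers into a destination; the trace reads back the default of 2 sources (val=2) on the software module. Standalone, same caveat about the omitted config:

  # xor of 2 source buffers into one destination, 1 second, verified
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y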
00:06:45.588 [2024-06-07 14:09:08.980463] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid306634 ] 00:06:45.588 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.588 [2024-06-07 14:09:09.050528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.588 [2024-06-07 14:09:09.086380] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.588 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.588 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.588 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.588 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.588 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.589 14:09:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.974 
14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.974 00:06:46.974 real 0m1.251s 00:06:46.974 user 0m1.150s 00:06:46.974 sys 0m0.112s 00:06:46.974 14:09:10 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.974 14:09:10 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:46.974 ************************************ 00:06:46.974 END TEST accel_xor 00:06:46.974 ************************************ 00:06:46.974 14:09:10 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:46.974 14:09:10 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:46.974 14:09:10 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.974 14:09:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.974 ************************************ 00:06:46.974 START TEST accel_xor 00:06:46.974 ************************************ 00:06:46.974 14:09:10 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:46.974 [2024-06-07 14:09:10.306178] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
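The second xor case adds -x 3, which presumably raises the number of source buffers to three (the trace below does read back val=3). Standalone, same caveat about the omitted config:

  # xor workload with 3 source buffers (-x 3), 1 second, verified
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3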
00:06:46.974 [2024-06-07 14:09:10.306268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307086 ] 00:06:46.974 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.974 [2024-06-07 14:09:10.374931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.974 [2024-06-07 14:09:10.408598] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.974 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.975 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.975 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.975 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:46.975 14:09:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:46.975 14:09:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:46.975 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:46.975 14:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.917 
14:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:47.917 14:09:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.917 00:06:47.917 real 0m1.247s 00:06:47.917 user 0m1.143s 00:06:47.917 sys 0m0.116s 00:06:47.917 14:09:11 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:47.917 14:09:11 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:47.917 ************************************ 00:06:47.917 END TEST accel_xor 00:06:47.917 ************************************ 00:06:47.917 14:09:11 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:47.917 14:09:11 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:47.917 14:09:11 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:47.917 14:09:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.178 ************************************ 00:06:48.178 START TEST accel_dif_verify 00:06:48.178 ************************************ 00:06:48.178 14:09:11 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:48.178 [2024-06-07 14:09:11.626670] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:06:48.178 [2024-06-07 14:09:11.626751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307341 ] 00:06:48.178 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.178 [2024-06-07 14:09:11.696030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.178 [2024-06-07 14:09:11.732292] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.178 
14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.178 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:48.179 14:09:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.566 
14:09:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:49.566 14:09:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.566 00:06:49.566 real 0m1.251s 00:06:49.566 user 0m1.158s 00:06:49.566 sys 0m0.105s 00:06:49.566 14:09:12 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:49.566 14:09:12 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:49.566 ************************************ 00:06:49.566 END TEST accel_dif_verify 00:06:49.566 ************************************ 00:06:49.566 14:09:12 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:49.566 14:09:12 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:49.566 14:09:12 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:49.566 14:09:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.566 ************************************ 00:06:49.566 START TEST accel_dif_generate 00:06:49.566 ************************************ 00:06:49.566 14:09:12 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:06:49.566 14:09:12 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:49.566 14:09:12 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:49.566 14:09:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.566 14:09:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.566 
14:09:12 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:49.566 14:09:12 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:49.566 14:09:12 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:49.566 14:09:12 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.566 14:09:12 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.566 14:09:12 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.566 14:09:12 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.566 14:09:12 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.566 14:09:12 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:49.566 14:09:12 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:49.566 [2024-06-07 14:09:12.951140] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:49.566 [2024-06-07 14:09:12.951229] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307627 ] 00:06:49.566 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.566 [2024-06-07 14:09:13.019230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.566 [2024-06-07 14:09:13.052715] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.566 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.566 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.566 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.566 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.566 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.566 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.566 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:49.567 14:09:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.955 14:09:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.955 14:09:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:50.956 14:09:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.956 00:06:50.956 real 0m1.247s 00:06:50.956 user 0m1.153s 00:06:50.956 sys 
0m0.106s 00:06:50.956 14:09:14 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:50.956 14:09:14 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:50.956 ************************************ 00:06:50.956 END TEST accel_dif_generate 00:06:50.956 ************************************ 00:06:50.956 14:09:14 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:50.956 14:09:14 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:50.956 14:09:14 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:50.956 14:09:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.956 ************************************ 00:06:50.956 START TEST accel_dif_generate_copy 00:06:50.956 ************************************ 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:50.956 [2024-06-07 14:09:14.270233] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:06:50.956 [2024-06-07 14:09:14.270311] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307978 ] 00:06:50.956 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.956 [2024-06-07 14:09:14.339412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.956 [2024-06-07 14:09:14.376043] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.956 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:50.957 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.957 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.957 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.957 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.957 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.957 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.957 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.957 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.957 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.957 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.957 14:09:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.899 00:06:51.899 real 0m1.251s 00:06:51.899 user 0m1.152s 00:06:51.899 sys 0m0.110s 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:51.899 14:09:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:51.899 ************************************ 00:06:51.899 END TEST accel_dif_generate_copy 00:06:51.899 ************************************ 00:06:51.899 14:09:15 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:51.899 14:09:15 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:51.899 14:09:15 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:51.899 14:09:15 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:51.899 14:09:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.161 ************************************ 00:06:52.161 START TEST accel_comp 00:06:52.161 ************************************ 00:06:52.161 14:09:15 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:52.161 [2024-06-07 14:09:15.596257] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:52.161 [2024-06-07 14:09:15.596324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308326 ] 00:06:52.161 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.161 [2024-06-07 14:09:15.662628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.161 [2024-06-07 14:09:15.694942] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 
14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.161 14:09:15 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.161 14:09:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:53.547 14:09:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.547 00:06:53.547 real 0m1.245s 00:06:53.547 user 0m1.154s 00:06:53.547 sys 0m0.103s 00:06:53.547 14:09:16 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:53.547 14:09:16 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:53.547 ************************************ 00:06:53.547 END TEST accel_comp 00:06:53.547 ************************************ 00:06:53.547 14:09:16 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.547 14:09:16 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:53.547 14:09:16 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:53.547 14:09:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.547 ************************************ 00:06:53.547 START TEST accel_decomp 00:06:53.547 ************************************ 00:06:53.547 14:09:16 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.547 14:09:16 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:53.547 14:09:16 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:53.547 14:09:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.547 14:09:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.547 14:09:16 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.547 14:09:16 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:53.547 14:09:16 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:53.547 14:09:16 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.547 14:09:16 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.547 14:09:16 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.547 14:09:16 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.547 14:09:16 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.547 14:09:16 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:53.547 14:09:16 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:53.547 [2024-06-07 14:09:16.916453] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:53.547 [2024-06-07 14:09:16.916530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308522 ] 00:06:53.547 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.547 [2024-06-07 14:09:16.985689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.547 [2024-06-07 14:09:17.020236] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.547 14:09:17 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:53.547 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.548 14:09:17 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.548 14:09:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.489 14:09:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:54.489 14:09:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.489 14:09:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.489 14:09:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.489 14:09:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:54.750 14:09:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.750 00:06:54.750 real 0m1.250s 00:06:54.750 user 0m1.156s 00:06:54.750 sys 0m0.108s 00:06:54.750 14:09:18 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:54.750 14:09:18 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:54.750 ************************************ 00:06:54.750 END TEST accel_decomp 00:06:54.750 ************************************ 00:06:54.750 
14:09:18 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:54.750 14:09:18 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:54.750 14:09:18 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:54.750 14:09:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.750 ************************************ 00:06:54.750 START TEST accel_decomp_full 00:06:54.750 ************************************ 00:06:54.750 14:09:18 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:54.750 [2024-06-07 14:09:18.240255] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:06:54.750 [2024-06-07 14:09:18.240320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308715 ] 00:06:54.750 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.750 [2024-06-07 14:09:18.309758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.750 [2024-06-07 14:09:18.346218] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
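The long runs of case "$var" in / IFS=: / read -r var val entries here, and in every other accel_* case in this section, are xtrace output from the loop in accel.sh that walks accel_perf's "key: value" summary one line at a time and remembers which opcode and module were exercised, so they can be asserted after the run (the [[ -n software ]], [[ -n decompress ]] and [[ software == software ]] checks that close each case). The exact match patterns are not visible in the trace; the following is only a rough sketch of such a loop, with assumed variable names and patterns:

    # Illustrative sketch -- the case patterns and the $accel_perf_bin variable are
    # assumptions, not copied from accel.sh.
    while IFS=: read -r var val; do
        case "$var" in
            *Workload*) accel_opc=${val# } ;;             # e.g. "decompress"
            *Module* | *Engine*) accel_module=${val# } ;; # e.g. "software"
        esac
    done < <("$accel_perf_bin" "$@")

    [[ -n $accel_module ]]          # a module was reported
    [[ -n $accel_opc ]]             # an opcode was reported
    [[ $accel_module == software ]] # this run used the software path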
00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.750 14:09:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:56.135 14:09:19 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.135 00:06:56.135 real 0m1.266s 00:06:56.135 user 0m1.164s 00:06:56.135 sys 0m0.114s 00:06:56.135 14:09:19 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:56.135 14:09:19 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:56.135 ************************************ 00:06:56.135 END TEST accel_decomp_full 00:06:56.135 ************************************ 00:06:56.135 14:09:19 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.135 14:09:19 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:56.135 14:09:19 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:56.135 14:09:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.135 ************************************ 00:06:56.135 START TEST accel_decomp_mcore 00:06:56.135 ************************************ 00:06:56.135 14:09:19 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.135 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:56.135 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:56.135 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.135 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.135 14:09:19 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:56.136 [2024-06-07 14:09:19.581084] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:56.136 [2024-06-07 14:09:19.581151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid309062 ] 00:06:56.136 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.136 [2024-06-07 14:09:19.649696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.136 [2024-06-07 14:09:19.685793] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.136 [2024-06-07 14:09:19.685931] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.136 [2024-06-07 14:09:19.686087] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.136 [2024-06-07 14:09:19.686088] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.136 14:09:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
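The START TEST / END TEST banners and the real/user/sys triplet that follow each case (the mcore result appears just below) come from the run_test helper in autotest_common.sh, which in essence times the wrapped command with bash's time builtin. A rough illustrative shape, leaving out the xtrace and timing bookkeeping the real helper does:

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"        # produces the real/user/sys lines
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

The roughly 4.4s of user time against about 1.26s of real time reported for the mcore runs is consistent with four reactors (-m 0xf) each polling for the one-second duration, whereas the single-core runs show user time close to real time.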
00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.521 00:06:57.521 real 0m1.258s 00:06:57.521 user 0m4.386s 00:06:57.521 sys 0m0.119s 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:57.521 14:09:20 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:57.521 ************************************ 00:06:57.521 END TEST accel_decomp_mcore 00:06:57.521 ************************************ 00:06:57.521 14:09:20 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:57.521 14:09:20 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:57.522 14:09:20 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:57.522 14:09:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.522 ************************************ 00:06:57.522 START TEST accel_decomp_full_mcore 00:06:57.522 ************************************ 00:06:57.522 14:09:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:57.522 14:09:20 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:57.522 14:09:20 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:57.522 14:09:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:20 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:57.522 14:09:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:20 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:57.522 14:09:20 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:57.522 14:09:20 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.522 14:09:20 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.522 14:09:20 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.522 14:09:20 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:57.522 14:09:20 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.522 14:09:20 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:57.522 14:09:20 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:57.522 [2024-06-07 14:09:20.912466] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:57.522 [2024-06-07 14:09:20.912520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid309423 ] 00:06:57.522 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.522 [2024-06-07 14:09:20.978723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.522 [2024-06-07 14:09:21.013465] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.522 [2024-06-07 14:09:21.013585] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.522 [2024-06-07 14:09:21.013742] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.522 [2024-06-07 14:09:21.013743] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.522 14:09:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.532 14:09:22 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.532 00:06:58.532 real 0m1.268s 00:06:58.532 user 0m4.439s 00:06:58.532 sys 0m0.118s 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:58.532 14:09:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:58.532 ************************************ 00:06:58.532 END TEST accel_decomp_full_mcore 00:06:58.532 ************************************ 00:06:58.792 14:09:22 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:58.792 14:09:22 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:58.792 14:09:22 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:58.792 14:09:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.792 ************************************ 00:06:58.792 START TEST accel_decomp_mthread 00:06:58.792 ************************************ 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
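Each decompress variant in this section reduces to the same accel_perf command line plus one or two extra flags; the wrapper assembles an accel JSON config (empty in this run, per accel_json_cfg=() and the [[ -n '' ]] check above) and hands it to accel_perf on /dev/fd/62, so the software module ends up servicing the opcode, hence the software == software assertion at the end of every case. A standalone reproduction sketch, with the -c config omitted and flag meanings inferred from the logged command lines rather than taken from accel_perf's documentation:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    PERF=$SPDK/build/examples/accel_perf
    BIB=$SPDK/test/accel/bib   # input file fed to the decompress workload

    $PERF -t 1 -w decompress -l $BIB -y              # accel_decomp
    $PERF -t 1 -w decompress -l $BIB -y -o 0         # accel_decomp_full
    $PERF -t 1 -w decompress -l $BIB -y -m 0xf       # accel_decomp_mcore
    $PERF -t 1 -w decompress -l $BIB -y -o 0 -m 0xf  # accel_decomp_full_mcore
    $PERF -t 1 -w decompress -l $BIB -y -T 2         # accel_decomp_mthread
    $PERF -t 1 -w decompress -l $BIB -y -o 0 -T 2    # accel_decomp_full_mthread

The '4096 bytes' and '111250 bytes' values echoed in the traces suggest -o 0 switches accel_perf from 4 KiB transfers to the full input size, and -m / -T appear to select the core mask and threads per core; these readings are inferences from this log, not statements about accel_perf's interface.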
00:06:58.792 [2024-06-07 14:09:22.255486] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:06:58.792 [2024-06-07 14:09:22.255562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid309736 ] 00:06:58.792 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.792 [2024-06-07 14:09:22.324486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.792 [2024-06-07 14:09:22.358341] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.792 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.793 14:09:22 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.793 14:09:22 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:00.173 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:00.173 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.173 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.173 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.173 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:00.173 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.173 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.173 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.173 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.174 00:07:00.174 real 0m1.254s 00:07:00.174 user 0m1.159s 00:07:00.174 sys 0m0.108s 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:00.174 14:09:23 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:00.174 ************************************ 00:07:00.174 END TEST accel_decomp_mthread 00:07:00.174 ************************************ 00:07:00.174 14:09:23 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:00.174 14:09:23 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:00.174 14:09:23 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:00.174 14:09:23 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.174 ************************************ 00:07:00.174 START TEST accel_decomp_full_mthread 00:07:00.174 ************************************ 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:00.174 [2024-06-07 14:09:23.583746] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:07:00.174 [2024-06-07 14:09:23.583814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid309893 ] 00:07:00.174 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.174 [2024-06-07 14:09:23.652370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.174 [2024-06-07 14:09:23.688305] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- 
# read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.174 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:00.175 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.175 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.175 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.175 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 
00:07:00.175 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.175 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.175 14:09:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.556 00:07:01.556 real 0m1.284s 00:07:01.556 user 0m1.184s 00:07:01.556 sys 0m0.114s 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:01.556 14:09:24 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:01.556 ************************************ 00:07:01.556 END TEST accel_decomp_full_mthread 00:07:01.556 
************************************ 00:07:01.556 14:09:24 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:01.556 14:09:24 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:01.556 14:09:24 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:01.556 14:09:24 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:01.556 14:09:24 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:01.556 14:09:24 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.556 14:09:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.556 14:09:24 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.556 14:09:24 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.556 14:09:24 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.556 14:09:24 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.556 14:09:24 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:01.556 14:09:24 accel -- accel/accel.sh@41 -- # jq -r . 00:07:01.556 ************************************ 00:07:01.556 START TEST accel_dif_functional_tests 00:07:01.556 ************************************ 00:07:01.556 14:09:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:01.556 [2024-06-07 14:09:24.970515] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:07:01.556 [2024-06-07 14:09:24.970577] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310159 ] 00:07:01.556 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.557 [2024-06-07 14:09:25.040169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.557 [2024-06-07 14:09:25.081234] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.557 [2024-06-07 14:09:25.081299] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.557 [2024-06-07 14:09:25.081303] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.557 00:07:01.557 00:07:01.557 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.557 http://cunit.sourceforge.net/ 00:07:01.557 00:07:01.557 00:07:01.557 Suite: accel_dif 00:07:01.557 Test: verify: DIF generated, GUARD check ...passed 00:07:01.557 Test: verify: DIF generated, APPTAG check ...passed 00:07:01.557 Test: verify: DIF generated, REFTAG check ...passed 00:07:01.557 Test: verify: DIF not generated, GUARD check ...[2024-06-07 14:09:25.132454] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:01.557 passed 00:07:01.557 Test: verify: DIF not generated, APPTAG check ...[2024-06-07 14:09:25.132498] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:01.557 passed 00:07:01.557 Test: verify: DIF not generated, REFTAG check ...[2024-06-07 14:09:25.132519] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:01.557 passed 00:07:01.557 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:01.557 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-07 14:09:25.132567] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:01.557 passed 00:07:01.557 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:07:01.557 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:01.557 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:01.557 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-07 14:09:25.132681] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:01.557 passed 00:07:01.557 Test: verify copy: DIF generated, GUARD check ...passed 00:07:01.557 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:01.557 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:01.557 Test: verify copy: DIF not generated, GUARD check ...[2024-06-07 14:09:25.132802] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:01.557 passed 00:07:01.557 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-07 14:09:25.132828] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:01.557 passed 00:07:01.557 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-07 14:09:25.132850] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:01.557 passed 00:07:01.557 Test: generate copy: DIF generated, GUARD check ...passed 00:07:01.557 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:01.557 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:01.557 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:01.557 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:01.557 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:01.557 Test: generate copy: iovecs-len validate ...[2024-06-07 14:09:25.133041] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:01.557 passed 00:07:01.557 Test: generate copy: buffer alignment validate ...passed 00:07:01.557 00:07:01.557 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.557 suites 1 1 n/a 0 0 00:07:01.557 tests 26 26 26 0 0 00:07:01.557 asserts 115 115 115 0 n/a 00:07:01.557 00:07:01.557 Elapsed time = 0.000 seconds 00:07:01.818 00:07:01.818 real 0m0.318s 00:07:01.818 user 0m0.432s 00:07:01.818 sys 0m0.138s 00:07:01.818 14:09:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:01.818 14:09:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:01.818 ************************************ 00:07:01.818 END TEST accel_dif_functional_tests 00:07:01.818 ************************************ 00:07:01.818 00:07:01.818 real 0m29.130s 00:07:01.818 user 0m32.561s 00:07:01.818 sys 0m4.299s 00:07:01.818 14:09:25 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:01.818 14:09:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.818 ************************************ 00:07:01.818 END TEST accel 00:07:01.818 ************************************ 00:07:01.818 14:09:25 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:01.818 14:09:25 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:01.818 14:09:25 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:01.818 14:09:25 -- common/autotest_common.sh@10 -- # set +x 00:07:01.818 ************************************ 00:07:01.818 START TEST accel_rpc 00:07:01.818 ************************************ 00:07:01.818 14:09:25 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:01.818 * Looking for test storage... 00:07:01.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:01.818 14:09:25 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:01.818 14:09:25 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=310399 00:07:01.818 14:09:25 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 310399 00:07:01.818 14:09:25 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:01.818 14:09:25 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 310399 ']' 00:07:01.818 14:09:25 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.818 14:09:25 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:01.818 14:09:25 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.818 14:09:25 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:01.818 14:09:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.078 [2024-06-07 14:09:25.510914] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
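The accel_rpc run starting above follows the stock spdk_tgt bring-up pattern seen throughout this log: launch the target with --wait-for-rpc, remember its pid, and block until the RPC socket answers. A minimal sketch using this workspace's paths (waitforlisten and killprocess are the helpers from common/autotest_common.sh in the trace; capturing the pid with $! is an assumption about how the harness does it):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc &
spdk_tgt_pid=$!                              # the trace shows this landed on pid 310399
trap 'killprocess $spdk_tgt_pid; exit 1' ERR
waitforlisten $spdk_tgt_pid                  # returns once /var/tmp/spdk.sock accepts RPC connections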
00:07:02.078 [2024-06-07 14:09:25.510983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310399 ] 00:07:02.078 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.078 [2024-06-07 14:09:25.583760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.078 [2024-06-07 14:09:25.623304] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.649 14:09:26 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:02.649 14:09:26 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:02.649 14:09:26 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:02.649 14:09:26 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:02.649 14:09:26 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:02.649 14:09:26 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:02.649 14:09:26 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:02.649 14:09:26 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:02.649 14:09:26 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:02.649 14:09:26 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.910 ************************************ 00:07:02.910 START TEST accel_assign_opcode 00:07:02.910 ************************************ 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:02.910 [2024-06-07 14:09:26.317347] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:02.910 [2024-06-07 14:09:26.329366] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:02.910 14:09:26 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:02.910 software 00:07:02.910 00:07:02.910 real 0m0.194s 00:07:02.910 user 0m0.051s 00:07:02.910 sys 0m0.010s 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:02.910 14:09:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:02.910 ************************************ 00:07:02.910 END TEST accel_assign_opcode 00:07:02.910 ************************************ 00:07:02.910 14:09:26 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 310399 00:07:02.910 14:09:26 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 310399 ']' 00:07:02.910 14:09:26 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 310399 00:07:02.910 14:09:26 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:07:02.910 14:09:26 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:02.910 14:09:26 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 310399 00:07:03.170 14:09:26 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:03.170 14:09:26 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:03.170 14:09:26 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 310399' 00:07:03.170 killing process with pid 310399 00:07:03.170 14:09:26 accel_rpc -- common/autotest_common.sh@968 -- # kill 310399 00:07:03.170 14:09:26 accel_rpc -- common/autotest_common.sh@973 -- # wait 310399 00:07:03.170 00:07:03.170 real 0m1.444s 00:07:03.170 user 0m1.513s 00:07:03.170 sys 0m0.427s 00:07:03.170 14:09:26 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:03.170 14:09:26 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.170 ************************************ 00:07:03.170 END TEST accel_rpc 00:07:03.170 ************************************ 00:07:03.431 14:09:26 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:03.431 14:09:26 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:03.431 14:09:26 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:03.431 14:09:26 -- common/autotest_common.sh@10 -- # set +x 00:07:03.431 ************************************ 00:07:03.431 START TEST app_cmdline 00:07:03.431 ************************************ 00:07:03.431 14:09:26 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:03.431 * Looking for test storage... 
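The accel_assign_opcode suite above drives the target purely over JSON-RPC. Condensed into the equivalent direct calls, against the spdk_tgt that was started with --wait-for-rpc (a sketch; rpc.py and jq are used exactly as in the trace):

scripts/rpc.py accel_assign_opc -o copy -m incorrect    # accepted pre-init, logged as "assigned to module incorrect"
scripts/rpc.py accel_assign_opc -o copy -m software     # re-assign the copy opcode to the software module
scripts/rpc.py framework_start_init                     # finish subsystem init so assignments are resolved
scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # prints "software", which the test greps for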
00:07:03.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:03.431 14:09:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:03.431 14:09:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=310712 00:07:03.431 14:09:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 310712 00:07:03.431 14:09:26 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:03.431 14:09:26 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 310712 ']' 00:07:03.431 14:09:26 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.431 14:09:26 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:03.431 14:09:26 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.431 14:09:26 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:03.431 14:09:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.431 [2024-06-07 14:09:27.030778] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:07:03.431 [2024-06-07 14:09:27.030852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310712 ] 00:07:03.431 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.691 [2024-06-07 14:09:27.105010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.691 [2024-06-07 14:09:27.145306] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.264 14:09:27 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:04.264 14:09:27 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:07:04.264 14:09:27 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:04.524 { 00:07:04.524 "version": "SPDK v24.09-pre git sha1 e55c9a812", 00:07:04.524 "fields": { 00:07:04.524 "major": 24, 00:07:04.524 "minor": 9, 00:07:04.524 "patch": 0, 00:07:04.524 "suffix": "-pre", 00:07:04.524 "commit": "e55c9a812" 00:07:04.524 } 00:07:04.524 } 00:07:04.524 14:09:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:04.524 14:09:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:04.524 14:09:27 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:04.524 14:09:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:04.524 14:09:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:04.524 14:09:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:04.524 14:09:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:04.524 14:09:27 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:04.524 14:09:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.524 14:09:27 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:04.524 14:09:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:04.524 14:09:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:04.524 14:09:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.524 14:09:28 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:07:04.524 14:09:28 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.525 14:09:28 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.525 14:09:28 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:04.525 14:09:28 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.525 14:09:28 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:04.525 14:09:28 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.525 14:09:28 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:04.525 14:09:28 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.525 14:09:28 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:04.525 14:09:28 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.525 request: 00:07:04.525 { 00:07:04.525 "method": "env_dpdk_get_mem_stats", 00:07:04.525 "req_id": 1 00:07:04.525 } 00:07:04.525 Got JSON-RPC error response 00:07:04.525 response: 00:07:04.525 { 00:07:04.525 "code": -32601, 00:07:04.525 "message": "Method not found" 00:07:04.525 } 00:07:04.786 14:09:28 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:07:04.786 14:09:28 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:04.786 14:09:28 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:04.786 14:09:28 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:04.786 14:09:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 310712 00:07:04.786 14:09:28 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 310712 ']' 00:07:04.786 14:09:28 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 310712 00:07:04.786 14:09:28 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:07:04.786 14:09:28 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:04.786 14:09:28 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 310712 00:07:04.786 14:09:28 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:04.786 14:09:28 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:04.786 14:09:28 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 310712' 00:07:04.786 killing process with pid 310712 00:07:04.786 14:09:28 app_cmdline -- common/autotest_common.sh@968 -- # kill 310712 00:07:04.786 14:09:28 app_cmdline -- common/autotest_common.sh@973 -- # wait 310712 00:07:05.046 00:07:05.046 real 0m1.568s 00:07:05.046 user 0m1.866s 00:07:05.046 sys 0m0.434s 00:07:05.046 14:09:28 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:05.046 14:09:28 app_cmdline -- 
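The cmdline test above exercises the --rpcs-allowed allow-list: this spdk_tgt instance permits only spdk_get_version and rpc_get_methods, so any other method fails with JSON-RPC error -32601. A sketch of the same three probes against that target:

scripts/rpc.py spdk_get_version                      # returns the version/fields object shown above
scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # lists exactly the two allowed methods
scripts/rpc.py env_dpdk_get_mem_stats                # rejected: error -32601 "Method not found"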
common/autotest_common.sh@10 -- # set +x 00:07:05.046 ************************************ 00:07:05.046 END TEST app_cmdline 00:07:05.046 ************************************ 00:07:05.046 14:09:28 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:05.046 14:09:28 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:05.046 14:09:28 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:05.046 14:09:28 -- common/autotest_common.sh@10 -- # set +x 00:07:05.046 ************************************ 00:07:05.047 START TEST version 00:07:05.047 ************************************ 00:07:05.047 14:09:28 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:05.047 * Looking for test storage... 00:07:05.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:05.047 14:09:28 version -- app/version.sh@17 -- # get_header_version major 00:07:05.047 14:09:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.047 14:09:28 version -- app/version.sh@14 -- # cut -f2 00:07:05.047 14:09:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.047 14:09:28 version -- app/version.sh@17 -- # major=24 00:07:05.047 14:09:28 version -- app/version.sh@18 -- # get_header_version minor 00:07:05.047 14:09:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.047 14:09:28 version -- app/version.sh@14 -- # cut -f2 00:07:05.047 14:09:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.047 14:09:28 version -- app/version.sh@18 -- # minor=9 00:07:05.047 14:09:28 version -- app/version.sh@19 -- # get_header_version patch 00:07:05.047 14:09:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.047 14:09:28 version -- app/version.sh@14 -- # cut -f2 00:07:05.047 14:09:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.047 14:09:28 version -- app/version.sh@19 -- # patch=0 00:07:05.047 14:09:28 version -- app/version.sh@20 -- # get_header_version suffix 00:07:05.047 14:09:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:05.047 14:09:28 version -- app/version.sh@14 -- # cut -f2 00:07:05.047 14:09:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.047 14:09:28 version -- app/version.sh@20 -- # suffix=-pre 00:07:05.047 14:09:28 version -- app/version.sh@22 -- # version=24.9 00:07:05.047 14:09:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:05.047 14:09:28 version -- app/version.sh@28 -- # version=24.9rc0 00:07:05.047 14:09:28 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:05.047 14:09:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:05.047 14:09:28 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:05.047 14:09:28 
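The version test above reconstructs the version string straight from the public header; its get_header_version helper reduces to the following (paths relative to the spdk tree, outputs as captured in this run):

grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # 24
grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # 9
grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # 0
grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'  # -pre
# patch is 0, so version=24.9; the -pre suffix maps to the 24.9rc0 reported by
# python3 -c 'import spdk; print(spdk.__version__)', which the script then compares against.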
version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:05.047 00:07:05.047 real 0m0.181s 00:07:05.047 user 0m0.094s 00:07:05.047 sys 0m0.127s 00:07:05.307 14:09:28 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:05.307 14:09:28 version -- common/autotest_common.sh@10 -- # set +x 00:07:05.307 ************************************ 00:07:05.307 END TEST version 00:07:05.307 ************************************ 00:07:05.307 14:09:28 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:05.307 14:09:28 -- spdk/autotest.sh@198 -- # uname -s 00:07:05.307 14:09:28 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:05.307 14:09:28 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:05.307 14:09:28 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:05.307 14:09:28 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:05.307 14:09:28 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:05.307 14:09:28 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:05.307 14:09:28 -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:05.307 14:09:28 -- common/autotest_common.sh@10 -- # set +x 00:07:05.307 14:09:28 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:05.307 14:09:28 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:05.307 14:09:28 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:05.307 14:09:28 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:05.307 14:09:28 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:05.307 14:09:28 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:05.307 14:09:28 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:05.307 14:09:28 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:05.307 14:09:28 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:05.307 14:09:28 -- common/autotest_common.sh@10 -- # set +x 00:07:05.307 ************************************ 00:07:05.307 START TEST nvmf_tcp 00:07:05.307 ************************************ 00:07:05.307 14:09:28 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:05.307 * Looking for test storage... 00:07:05.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.307 14:09:28 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.569 14:09:28 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.569 14:09:28 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.569 14:09:28 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.569 14:09:28 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.569 14:09:28 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.569 14:09:28 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.569 14:09:28 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:05.569 14:09:28 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.569 14:09:28 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:05.569 14:09:28 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.569 14:09:28 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.569 14:09:28 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.569 14:09:28 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.569 14:09:28 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.569 14:09:28 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.569 14:09:28 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.569 14:09:28 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.569 14:09:28 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:05.569 14:09:28 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:05.569 14:09:28 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:05.569 14:09:28 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:05.569 14:09:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.569 14:09:28 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:05.569 14:09:28 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:05.569 14:09:28 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:05.569 14:09:28 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:05.569 14:09:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:05.569 ************************************ 00:07:05.569 START TEST nvmf_example 00:07:05.569 ************************************ 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:05.569 * Looking for test storage... 
00:07:05.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.569 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.570 14:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:13.709 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:13.709 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:13.709 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:13.710 Found net devices under 
0000:31:00.0: cvl_0_0 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:13.710 Found net devices under 0000:31:00.1: cvl_0_1 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.710 14:09:36 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:13.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:07:13.710 00:07:13.710 --- 10.0.0.2 ping statistics --- 00:07:13.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.710 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:13.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:07:13.710 00:07:13.710 --- 10.0.0.1 ping statistics --- 00:07:13.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.710 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=315421 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 315421 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 315421 ']' 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
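At this point nvmf_tcp_init has built the loopback TCP topology the example test runs against: one port of the detected E810 pair (cvl_0_0) is moved into a dedicated network namespace and addressed as the target side (10.0.0.2), the peer port (cvl_0_1) stays in the default namespace as the initiator side (10.0.0.1), traffic to TCP port 4420 is allowed through iptables, and both directions are verified with ping before the example nvmf target is launched inside the namespace. A minimal standalone sketch of that setup, assuming the same cvl_0_0/cvl_0_1 interface names and addresses seen in the log above, would be:

    # move the target-side port into its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1, target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic on the default port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
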
00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:13.710 14:09:37 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:13.710 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.653 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:14.653 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:07:14.653 14:09:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:14.653 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:14.653 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:14.653 14:09:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:14.653 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:14.654 14:09:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:14.654 EAL: No free 2048 kB hugepages reported on node 1 
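The rpc_cmd calls above configure the target that spdk_nvme_perf is about to exercise: a TCP transport, a 64 MiB malloc bdev with a 512-byte block size, a subsystem nqn.2016-06.io.spdk:cnode1 that allows any host, that bdev attached as a namespace, and a TCP listener on 10.0.0.2:4420. In the test framework rpc_cmd is a thin wrapper over the target's JSON-RPC socket, so a roughly equivalent sequence using scripts/rpc.py (a sketch only; flags are copied verbatim from the log above) would be:

    # transport and backing device
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512
    # subsystem, namespace, and TCP listener
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 10-second mixed random read/write run, queue depth 64, 4 KiB I/O, from the initiator side
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
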
00:07:26.882 Initializing NVMe Controllers 00:07:26.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:26.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:26.882 Initialization complete. Launching workers. 00:07:26.882 ======================================================== 00:07:26.882 Latency(us) 00:07:26.883 Device Information : IOPS MiB/s Average min max 00:07:26.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18611.84 72.70 3438.23 690.60 16176.01 00:07:26.883 ======================================================== 00:07:26.883 Total : 18611.84 72.70 3438.23 690.60 16176.01 00:07:26.883 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:26.883 rmmod nvme_tcp 00:07:26.883 rmmod nvme_fabrics 00:07:26.883 rmmod nvme_keyring 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 315421 ']' 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 315421 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 315421 ']' 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 315421 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 315421 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 315421' 00:07:26.883 killing process with pid 315421 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 315421 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 315421 00:07:26.883 nvmf threads initialize successfully 00:07:26.883 bdev subsystem init successfully 00:07:26.883 created a nvmf target service 00:07:26.883 create targets's poll groups done 00:07:26.883 all subsystems of target started 00:07:26.883 nvmf target is running 00:07:26.883 all subsystems of target stopped 00:07:26.883 destroy targets's poll groups done 00:07:26.883 destroyed the nvmf target service 00:07:26.883 bdev subsystem finish successfully 00:07:26.883 nvmf threads destroy successfully 00:07:26.883 14:09:48 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.883 14:09:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.144 14:09:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:27.144 14:09:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:27.144 14:09:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:27.144 14:09:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:27.406 00:07:27.406 real 0m21.800s 00:07:27.406 user 0m46.948s 00:07:27.406 sys 0m7.001s 00:07:27.406 14:09:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:27.406 14:09:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:27.406 ************************************ 00:07:27.406 END TEST nvmf_example 00:07:27.406 ************************************ 00:07:27.406 14:09:50 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:27.406 14:09:50 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:27.406 14:09:50 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:27.406 14:09:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:27.406 ************************************ 00:07:27.406 START TEST nvmf_filesystem 00:07:27.406 ************************************ 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:27.406 * Looking for test storage... 
00:07:27.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:27.406 14:09:50 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:27.406 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:27.407 14:09:50 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:27.407 #define SPDK_CONFIG_H 00:07:27.407 #define SPDK_CONFIG_APPS 1 00:07:27.407 #define SPDK_CONFIG_ARCH native 00:07:27.407 #undef SPDK_CONFIG_ASAN 00:07:27.407 #undef SPDK_CONFIG_AVAHI 00:07:27.407 #undef SPDK_CONFIG_CET 00:07:27.407 #define SPDK_CONFIG_COVERAGE 1 00:07:27.407 #define SPDK_CONFIG_CROSS_PREFIX 00:07:27.407 #undef SPDK_CONFIG_CRYPTO 00:07:27.407 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:27.407 #undef SPDK_CONFIG_CUSTOMOCF 00:07:27.407 #undef SPDK_CONFIG_DAOS 00:07:27.407 #define SPDK_CONFIG_DAOS_DIR 00:07:27.407 #define SPDK_CONFIG_DEBUG 1 00:07:27.407 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:27.407 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:27.407 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:27.407 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:27.407 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:27.407 #undef SPDK_CONFIG_DPDK_UADK 00:07:27.407 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:27.407 #define SPDK_CONFIG_EXAMPLES 1 00:07:27.407 #undef SPDK_CONFIG_FC 00:07:27.407 #define SPDK_CONFIG_FC_PATH 00:07:27.407 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:27.407 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:27.407 #undef SPDK_CONFIG_FUSE 00:07:27.407 #undef SPDK_CONFIG_FUZZER 00:07:27.407 #define SPDK_CONFIG_FUZZER_LIB 00:07:27.407 #undef SPDK_CONFIG_GOLANG 00:07:27.407 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:27.407 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:27.407 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:27.407 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:27.407 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:27.407 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:27.407 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:27.407 #define SPDK_CONFIG_IDXD 1 00:07:27.407 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:27.407 #undef SPDK_CONFIG_IPSEC_MB 00:07:27.407 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:27.407 #define SPDK_CONFIG_ISAL 1 00:07:27.407 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:27.407 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:27.407 #define 
SPDK_CONFIG_LIBDIR 00:07:27.407 #undef SPDK_CONFIG_LTO 00:07:27.407 #define SPDK_CONFIG_MAX_LCORES 00:07:27.407 #define SPDK_CONFIG_NVME_CUSE 1 00:07:27.407 #undef SPDK_CONFIG_OCF 00:07:27.407 #define SPDK_CONFIG_OCF_PATH 00:07:27.407 #define SPDK_CONFIG_OPENSSL_PATH 00:07:27.407 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:27.407 #define SPDK_CONFIG_PGO_DIR 00:07:27.407 #undef SPDK_CONFIG_PGO_USE 00:07:27.407 #define SPDK_CONFIG_PREFIX /usr/local 00:07:27.407 #undef SPDK_CONFIG_RAID5F 00:07:27.407 #undef SPDK_CONFIG_RBD 00:07:27.407 #define SPDK_CONFIG_RDMA 1 00:07:27.407 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:27.407 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:27.407 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:27.407 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:27.407 #define SPDK_CONFIG_SHARED 1 00:07:27.407 #undef SPDK_CONFIG_SMA 00:07:27.407 #define SPDK_CONFIG_TESTS 1 00:07:27.407 #undef SPDK_CONFIG_TSAN 00:07:27.407 #define SPDK_CONFIG_UBLK 1 00:07:27.407 #define SPDK_CONFIG_UBSAN 1 00:07:27.407 #undef SPDK_CONFIG_UNIT_TESTS 00:07:27.407 #undef SPDK_CONFIG_URING 00:07:27.407 #define SPDK_CONFIG_URING_PATH 00:07:27.407 #undef SPDK_CONFIG_URING_ZNS 00:07:27.407 #undef SPDK_CONFIG_USDT 00:07:27.407 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:27.407 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:27.407 #define SPDK_CONFIG_VFIO_USER 1 00:07:27.407 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:27.407 #define SPDK_CONFIG_VHOST 1 00:07:27.407 #define SPDK_CONFIG_VIRTIO 1 00:07:27.407 #undef SPDK_CONFIG_VTUNE 00:07:27.407 #define SPDK_CONFIG_VTUNE_DIR 00:07:27.407 #define SPDK_CONFIG_WERROR 1 00:07:27.407 #define SPDK_CONFIG_WPDK_DIR 00:07:27.407 #undef SPDK_CONFIG_XNVME 00:07:27.407 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:27.407 14:09:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:27.408 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:27.671 
14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:27.671 
14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:27.671 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
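The sanitizer plumbing traced here amounts to rebuilding a leak-suppression file and pointing LSAN at it, alongside the ASAN/UBSAN option strings shown above. A minimal stand-alone sketch of that step, reusing the exact option strings and file path from the trace (the ordering of the file rebuild is simplified):

    # Rebuild the leak-suppression file and wire up the sanitizers as the trace shows.
    sup=/var/tmp/asan_suppression_file
    rm -rf "$sup"
    echo 'leak:libfuse3.so' >> "$sup"          # known libfuse3 leak is tolerated
    export LSAN_OPTIONS="suppressions=$sup"
    export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
    export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'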
00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 318226 ]] 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 318226 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:27.672 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.XthomP 00:07:27.673 
14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.XthomP/tests/target /tmp/spdk.XthomP 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1050284032 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4234145792 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=122192531456 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370943488 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=7178412032 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64629833728 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685469696 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864241152 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874190336 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9949184 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=353280 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=150528 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684879872 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685473792 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=593920 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937089024 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937093120 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:27.673 * Looking for test storage... 
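The set_test_storage pass above reduces to one question: does the filesystem behind the chosen test directory have at least the requested ~2 GiB free? A minimal sketch of that check using only df and awk (the helper name is illustrative; the directory, size, and message are the ones from this run):

    # Return success if the filesystem behind $1 has at least $2 bytes available.
    has_test_storage() {
        local dir=$1 requested=$2 avail_kb
        # POSIX df output: second row, fourth column is available space in 1K blocks.
        avail_kb=$(df -P "$dir" | awk 'NR==2 {print $4}')
        (( avail_kb * 1024 >= requested ))
    }

    target=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    has_test_storage "$target" $((2 * 1024 * 1024 * 1024)) &&
        echo "* Found test storage at $target"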
00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=122192531456 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=9393004544 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:27.673 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
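nvmf/common.sh, sourced just above, fixes the listener ports (4420-4422) and generates the host identity with nvme-cli. One way to reproduce that identity step on its own, keeping the variable names from the trace (the parameter-expansion trick for the UUID is ours, not necessarily the script's exact wording):

    # nvme gen-hostnqn prints "nqn.2014-08.org.nvmexpress:uuid:<uuid>".
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}        # keep only the UUID after the last ':'
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    printf 'host NQN: %s\nhost ID:  %s\n' "$NVME_HOSTNQN" "$NVME_HOSTID"

In this run that produced host ID 801c19ac-fce9-ec11-9bc7-a4bf019282bb, which the initiator reuses later when it connects to the target.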
00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:27.674 14:09:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
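The block above builds per-family lists of supported PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, several Mellanox parts) and then keeps the E810 entries because SPDK_TEST_NVMF_NICS is set to e810. An equivalent discovery can be sketched with lspci and sysfs; the real script walks its own pci_bus_cache, so treat this purely as an illustration:

    # List Intel E810 functions and the kernel net devices bound to each of them.
    for id in 8086:1592 8086:159b; do
        for pci in $(lspci -D -d "$id" | awk '{print $1}'); do
            printf 'Found %s (%s)\n' "$pci" "$id"
            ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null   # cvl_0_0 / cvl_0_1 in this run
        done
    done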
00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:35.862 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:35.862 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.862 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:35.863 Found net devices under 0000:31:00.0: cvl_0_0 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:35.863 Found net devices under 0000:31:00.1: cvl_0_1 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.863 14:09:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:35.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:35.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:07:35.863 00:07:35.863 --- 10.0.0.2 ping statistics --- 00:07:35.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.863 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:35.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:07:35.863 00:07:35.863 --- 10.0.0.1 ping statistics --- 00:07:35.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.863 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.863 ************************************ 00:07:35.863 START TEST nvmf_filesystem_no_in_capsule 00:07:35.863 ************************************ 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=322525 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 322525 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 
322525 ']' 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:35.863 14:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.863 [2024-06-07 14:09:59.286248] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:07:35.863 [2024-06-07 14:09:59.286306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.863 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.863 [2024-06-07 14:09:59.366434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.863 [2024-06-07 14:09:59.408327] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.863 [2024-06-07 14:09:59.408371] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.863 [2024-06-07 14:09:59.408379] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.863 [2024-06-07 14:09:59.408386] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.863 [2024-06-07 14:09:59.408391] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
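Everything from nvmfappstart -m 0xF down to the EAL notices above boils down to starting nvmf_tgt inside the test namespace and waiting for its RPC socket. A condensed sketch using the binary, flags, and namespace from this run (rpc.py is SPDK's stock RPC client; polling rpc_get_methods is one way to implement the waitforlisten step, not necessarily the script's own):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Launch the NVMe-oF target in the namespace that owns cvl_0_0 (10.0.0.2).
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Block until the app answers on its default RPC socket.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is up and listening on /var/tmp/spdk.sock"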
00:07:35.863 [2024-06-07 14:09:59.408540] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.863 [2024-06-07 14:09:59.408642] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.863 [2024-06-07 14:09:59.408802] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.863 [2024-06-07 14:09:59.408803] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.435 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:36.435 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:36.435 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.435 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:36.435 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.695 [2024-06-07 14:10:00.121957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.695 Malloc1 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.695 [2024-06-07 14:10:00.255466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:36.695 { 00:07:36.695 "name": "Malloc1", 00:07:36.695 "aliases": [ 00:07:36.695 "4af35ef8-0dc4-4b65-b1c5-c5cec4c1a977" 00:07:36.695 ], 00:07:36.695 "product_name": "Malloc disk", 00:07:36.695 "block_size": 512, 00:07:36.695 "num_blocks": 1048576, 00:07:36.695 "uuid": "4af35ef8-0dc4-4b65-b1c5-c5cec4c1a977", 00:07:36.695 "assigned_rate_limits": { 00:07:36.695 "rw_ios_per_sec": 0, 00:07:36.695 "rw_mbytes_per_sec": 0, 00:07:36.695 "r_mbytes_per_sec": 0, 00:07:36.695 "w_mbytes_per_sec": 0 00:07:36.695 }, 00:07:36.695 "claimed": true, 00:07:36.695 "claim_type": "exclusive_write", 00:07:36.695 "zoned": false, 00:07:36.695 "supported_io_types": { 00:07:36.695 "read": true, 00:07:36.695 "write": true, 00:07:36.695 "unmap": true, 00:07:36.695 "write_zeroes": true, 00:07:36.695 "flush": true, 00:07:36.695 "reset": true, 00:07:36.695 "compare": false, 00:07:36.695 "compare_and_write": false, 00:07:36.695 "abort": true, 00:07:36.695 "nvme_admin": false, 00:07:36.695 "nvme_io": false 00:07:36.695 }, 00:07:36.695 "memory_domains": [ 00:07:36.695 { 00:07:36.695 "dma_device_id": "system", 00:07:36.695 "dma_device_type": 1 00:07:36.695 }, 00:07:36.695 { 00:07:36.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.695 "dma_device_type": 2 00:07:36.695 } 00:07:36.695 ], 00:07:36.695 "driver_specific": {} 00:07:36.695 } 00:07:36.695 ]' 00:07:36.695 
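The bring-up just traced, written out as plain rpc.py calls: create the TCP transport with the in-capsule data size forced to 0 (this is the no_in_capsule variant), back the namespace with a 512 MiB malloc bdev, and expose it on 10.0.0.2:4420. The arithmetic at the end mirrors get_bdev_size, block_size times num_blocks from bdev_get_bdevs. A sketch, assuming the rpc.py shipped in this workspace's SPDK tree:

    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc bdev_malloc_create 512 512 -b Malloc1              # 512 MiB, 512-byte blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # get_bdev_size: 512 * 1048576 = 536870912 bytes (512 MiB), the malloc_size the
    # test later compares against the size reported by the connected namespace.
    rpc bdev_get_bdevs -b Malloc1 | jq '.[0].block_size * .[0].num_blocks'

The initiator side of the run then attaches with nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 plus the host NQN/ID generated earlier, which is the command visible a few lines further down.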
14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:36.695 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:36.956 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:36.956 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:36.956 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:36.956 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:36.956 14:10:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:38.337 14:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:38.337 14:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:38.337 14:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:38.337 14:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:38.337 14:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:40.251 14:10:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:40.251 14:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:40.821 14:10:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:40.821 14:10:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:41.763 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:41.763 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:41.763 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:41.763 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:41.763 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.763 ************************************ 00:07:41.763 START TEST filesystem_ext4 00:07:41.763 ************************************ 00:07:41.763 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:41.764 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:41.764 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.764 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:41.764 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:41.764 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:41.764 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:41.764 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:41.764 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:41.764 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:41.764 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:41.764 mke2fs 1.46.5 (30-Dec-2021) 00:07:42.024 Discarding device blocks: 0/522240 done 00:07:42.024 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:42.024 
Filesystem UUID: 09b70a8d-6d39-45c9-8416-6047c35efa1e 00:07:42.024 Superblock backups stored on blocks: 00:07:42.024 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:42.024 00:07:42.024 Allocating group tables: 0/64 done 00:07:42.024 Writing inode tables: 0/64 done 00:07:42.024 Creating journal (8192 blocks): done 00:07:42.024 Writing superblocks and filesystem accounting information: 0/64 done 00:07:42.024 00:07:42.024 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:42.024 14:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 322525 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.967 00:07:42.967 real 0m1.160s 00:07:42.967 user 0m0.023s 00:07:42.967 sys 0m0.051s 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:42.967 ************************************ 00:07:42.967 END TEST filesystem_ext4 00:07:42.967 ************************************ 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.967 ************************************ 00:07:42.967 START TEST filesystem_btrfs 00:07:42.967 ************************************ 00:07:42.967 14:10:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:42.967 14:10:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:43.539 btrfs-progs v6.6.2 00:07:43.539 See https://btrfs.readthedocs.io for more information. 00:07:43.539 00:07:43.539 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:43.539 NOTE: several default settings have changed in version 5.15, please make sure 00:07:43.539 this does not affect your deployments: 00:07:43.539 - DUP for metadata (-m dup) 00:07:43.539 - enabled no-holes (-O no-holes) 00:07:43.539 - enabled free-space-tree (-R free-space-tree) 00:07:43.539 00:07:43.539 Label: (null) 00:07:43.539 UUID: 2721f812-d7c1-49c7-a7e2-f767041c6fae 00:07:43.539 Node size: 16384 00:07:43.539 Sector size: 4096 00:07:43.539 Filesystem size: 510.00MiB 00:07:43.539 Block group profiles: 00:07:43.539 Data: single 8.00MiB 00:07:43.539 Metadata: DUP 32.00MiB 00:07:43.539 System: DUP 8.00MiB 00:07:43.539 SSD detected: yes 00:07:43.539 Zoned device: no 00:07:43.539 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:43.539 Runtime features: free-space-tree 00:07:43.539 Checksum: crc32c 00:07:43.539 Number of devices: 1 00:07:43.539 Devices: 00:07:43.539 ID SIZE PATH 00:07:43.539 1 510.00MiB /dev/nvme0n1p1 00:07:43.539 00:07:43.539 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:43.539 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 322525 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:43.801 00:07:43.801 real 0m0.646s 00:07:43.801 user 0m0.023s 00:07:43.801 sys 0m0.064s 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:43.801 ************************************ 00:07:43.801 END TEST filesystem_btrfs 00:07:43.801 ************************************ 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:43.801 14:10:07 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.801 ************************************ 00:07:43.801 START TEST filesystem_xfs 00:07:43.801 ************************************ 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:43.801 14:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:43.801 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:43.801 = sectsz=512 attr=2, projid32bit=1 00:07:43.801 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:43.801 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:43.801 data = bsize=4096 blocks=130560, imaxpct=25 00:07:43.801 = sunit=0 swidth=0 blks 00:07:43.801 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:43.801 log =internal log bsize=4096 blocks=16384, version=2 00:07:43.801 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:43.801 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:45.187 Discarding blocks...Done. 
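The three filesystem_* subtests in this pass (ext4, btrfs, and the xfs run just formatted above) all go through the same helper. Reconstructed from the xtrace lines at target/filesystem.sh@18-43, the per-filesystem check is roughly the sketch below; it is inferred from the trace rather than copied from the script, and $nvmfpid stands for the target PID (322525 in this pass):

  nvmf_filesystem_create() {                           # sketch; argument and error handling simplified
      local fstype=$1 nvme_name=$2
      make_filesystem "$fstype" "/dev/${nvme_name}p1"  # mkfs.ext4 -F / mkfs.btrfs -f / mkfs.xfs -f
      mount "/dev/${nvme_name}p1" /mnt/device
      touch /mnt/device/aaa && sync                    # write through the NVMe/TCP namespace
      rm /mnt/device/aaa && sync
      umount /mnt/device
      kill -0 "$nvmfpid"                               # target process must still be alive
      lsblk -l -o NAME | grep -q -w "$nvme_name"       # namespace still visible to the host
      lsblk -l -o NAME | grep -q -w "${nvme_name}p1"   # partition still visible
  }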
00:07:45.187 14:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:07:45.187 14:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:47.099 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:47.099 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:47.099 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:47.099 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:47.099 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:47.099 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 322525 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:47.359 00:07:47.359 real 0m3.432s 00:07:47.359 user 0m0.021s 00:07:47.359 sys 0m0.057s 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:47.359 ************************************ 00:07:47.359 END TEST filesystem_xfs 00:07:47.359 ************************************ 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:47.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:07:47.359 
14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 322525 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 322525 ']' 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 322525 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:47.359 14:10:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 322525 00:07:47.619 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:47.619 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:47.619 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 322525' 00:07:47.619 killing process with pid 322525 00:07:47.619 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 322525 00:07:47.619 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 322525 00:07:47.619 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:47.619 00:07:47.619 real 0m12.028s 00:07:47.619 user 0m47.426s 00:07:47.619 sys 0m1.058s 00:07:47.619 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:47.619 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.619 ************************************ 00:07:47.619 END TEST nvmf_filesystem_no_in_capsule 00:07:47.619 ************************************ 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.878 
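From here the suite repeats the same filesystem checks with in-capsule data enabled. The run_test calls in the trace show that filesystem.sh drives two passes of nvmf_filesystem_part differing only in the in-capsule size handed to the transport; the outline below is inferred from the filesystem.sh@76-83 branches and the @106 run_test visible in the trace (the 0-size invocation itself appears earlier in the log):

  run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
  run_test nvmf_filesystem_in_capsule    nvmf_filesystem_part 4096

  # Inside nvmf_filesystem_part the subtest names track the in-capsule size:
  if [ "$in_capsule" -eq 0 ]; then
      run_test filesystem_ext4  nvmf_filesystem_create ext4  nvme0n1
      run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
      run_test filesystem_xfs   nvmf_filesystem_create xfs   nvme0n1
  else
      run_test filesystem_in_capsule_ext4  nvmf_filesystem_create ext4  nvme0n1
      run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
      run_test filesystem_in_capsule_xfs   nvmf_filesystem_create xfs   nvme0n1
  fi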
************************************ 00:07:47.878 START TEST nvmf_filesystem_in_capsule 00:07:47.878 ************************************ 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=325106 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 325106 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 325106 ']' 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:47.878 14:10:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.878 [2024-06-07 14:10:11.388934] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:07:47.878 [2024-06-07 14:10:11.388982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.878 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.878 [2024-06-07 14:10:11.460523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.878 [2024-06-07 14:10:11.494406] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.878 [2024-06-07 14:10:11.494444] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.878 [2024-06-07 14:10:11.494451] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.878 [2024-06-07 14:10:11.494458] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.878 [2024-06-07 14:10:11.494464] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:47.878 [2024-06-07 14:10:11.494607] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.878 [2024-06-07 14:10:11.494734] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.878 [2024-06-07 14:10:11.494890] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.878 [2024-06-07 14:10:11.494891] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.816 [2024-06-07 14:10:12.208929] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.816 Malloc1 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.816 14:10:12 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.816 [2024-06-07 14:10:12.336432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:48.816 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.817 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:48.817 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:07:48.817 { 00:07:48.817 "name": "Malloc1", 00:07:48.817 "aliases": [ 00:07:48.817 "ed59f597-57d1-44df-9b41-9b37d9322855" 00:07:48.817 ], 00:07:48.817 "product_name": "Malloc disk", 00:07:48.817 "block_size": 512, 00:07:48.817 "num_blocks": 1048576, 00:07:48.817 "uuid": "ed59f597-57d1-44df-9b41-9b37d9322855", 00:07:48.817 "assigned_rate_limits": { 00:07:48.817 "rw_ios_per_sec": 0, 00:07:48.817 "rw_mbytes_per_sec": 0, 00:07:48.817 "r_mbytes_per_sec": 0, 00:07:48.817 "w_mbytes_per_sec": 0 00:07:48.817 }, 00:07:48.817 "claimed": true, 00:07:48.817 "claim_type": "exclusive_write", 00:07:48.817 "zoned": false, 00:07:48.817 "supported_io_types": { 00:07:48.817 "read": true, 00:07:48.817 "write": true, 00:07:48.817 "unmap": true, 00:07:48.817 "write_zeroes": true, 00:07:48.817 "flush": true, 00:07:48.817 "reset": true, 00:07:48.817 "compare": false, 00:07:48.817 "compare_and_write": false, 00:07:48.817 "abort": true, 00:07:48.817 "nvme_admin": false, 00:07:48.817 "nvme_io": false 00:07:48.817 }, 00:07:48.817 "memory_domains": [ 00:07:48.817 { 00:07:48.817 "dma_device_id": "system", 00:07:48.817 "dma_device_type": 1 00:07:48.817 }, 00:07:48.817 { 00:07:48.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.817 "dma_device_type": 2 00:07:48.817 } 00:07:48.817 ], 00:07:48.817 "driver_specific": {} 00:07:48.817 } 00:07:48.817 ]' 00:07:48.817 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] 
.block_size' 00:07:48.817 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:07:48.817 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:07:48.817 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:07:48.817 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:07:48.817 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:07:48.817 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:48.817 14:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:50.723 14:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:50.723 14:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:07:50.723 14:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:07:50.723 14:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:07:50.723 14:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:52.632 14:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:52.892 14:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:52.892 14:10:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:54.274 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:54.274 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:54.274 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:54.274 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:54.274 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.274 ************************************ 00:07:54.274 START TEST filesystem_in_capsule_ext4 00:07:54.274 ************************************ 00:07:54.274 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:54.274 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:54.274 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.274 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:54.274 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:07:54.274 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:54.274 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:07:54.274 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:07:54.274 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:07:54.275 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:07:54.275 14:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:54.275 mke2fs 1.46.5 (30-Dec-2021) 00:07:54.275 Discarding device blocks: 0/522240 done 00:07:54.275 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:54.275 Filesystem UUID: 9ccf3017-5fac-42de-9baf-460a14bfe676 00:07:54.275 Superblock backups stored on blocks: 00:07:54.275 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:54.275 00:07:54.275 Allocating group tables: 0/64 done 00:07:54.275 Writing inode tables: 0/64 done 00:07:56.859 Creating journal (8192 blocks): done 00:07:57.119 Writing superblocks and filesystem accounting information: 0/64 done 00:07:57.119 00:07:57.119 14:10:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:07:57.120 14:10:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:57.689 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 325106 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:57.950 00:07:57.950 real 0m3.817s 00:07:57.950 user 0m0.030s 00:07:57.950 sys 0m0.043s 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:57.950 ************************************ 00:07:57.950 END TEST filesystem_in_capsule_ext4 00:07:57.950 ************************************ 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.950 ************************************ 00:07:57.950 START TEST filesystem_in_capsule_btrfs 00:07:57.950 ************************************ 00:07:57.950 14:10:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:07:57.950 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:58.520 btrfs-progs v6.6.2 00:07:58.520 See https://btrfs.readthedocs.io for more information. 00:07:58.520 00:07:58.520 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:58.520 NOTE: several default settings have changed in version 5.15, please make sure 00:07:58.520 this does not affect your deployments: 00:07:58.520 - DUP for metadata (-m dup) 00:07:58.520 - enabled no-holes (-O no-holes) 00:07:58.520 - enabled free-space-tree (-R free-space-tree) 00:07:58.520 00:07:58.520 Label: (null) 00:07:58.520 UUID: fd748f8b-48c3-4d21-bd0a-a8d00913cce5 00:07:58.520 Node size: 16384 00:07:58.520 Sector size: 4096 00:07:58.520 Filesystem size: 510.00MiB 00:07:58.520 Block group profiles: 00:07:58.520 Data: single 8.00MiB 00:07:58.520 Metadata: DUP 32.00MiB 00:07:58.520 System: DUP 8.00MiB 00:07:58.520 SSD detected: yes 00:07:58.520 Zoned device: no 00:07:58.520 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:58.520 Runtime features: free-space-tree 00:07:58.520 Checksum: crc32c 00:07:58.520 Number of devices: 1 00:07:58.520 Devices: 00:07:58.520 ID SIZE PATH 00:07:58.520 1 510.00MiB /dev/nvme0n1p1 00:07:58.520 00:07:58.520 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:07:58.520 14:10:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 325106 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.462 00:07:59.462 real 0m1.322s 00:07:59.462 user 0m0.025s 00:07:59.462 sys 0m0.065s 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:59.462 ************************************ 00:07:59.462 END TEST filesystem_in_capsule_btrfs 00:07:59.462 ************************************ 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.462 ************************************ 00:07:59.462 START TEST filesystem_in_capsule_xfs 00:07:59.462 ************************************ 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:07:59.462 14:10:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:59.462 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:59.462 = sectsz=512 attr=2, projid32bit=1 00:07:59.462 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:59.462 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:59.462 data = bsize=4096 blocks=130560, imaxpct=25 00:07:59.462 = sunit=0 swidth=0 blks 00:07:59.462 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:59.462 log =internal log bsize=4096 blocks=16384, version=2 00:07:59.462 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:59.462 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:00.402 Discarding blocks...Done. 
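What distinguishes this in-capsule pass is only the target-side configuration performed before the nvme connect traced above: the TCP transport is created with a 4096-byte in-capsule data size, so small writes travel inside the command capsule instead of needing a separate data transfer. Condensed from the rpc_cmd and nvme lines at target/filesystem.sh@52-60 in this pass:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096     # -c 4096: in-capsule data size in bytes
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1               # 512 MiB bdev with 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
               --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420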
00:08:00.402 14:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:08:00.402 14:10:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:02.312 14:10:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:02.312 14:10:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:02.312 14:10:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:02.312 14:10:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:02.312 14:10:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:02.312 14:10:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:02.312 14:10:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 325106 00:08:02.312 14:10:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:02.312 14:10:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:02.312 14:10:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:02.312 14:10:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:02.312 00:08:02.312 real 0m3.025s 00:08:02.312 user 0m0.021s 00:08:02.312 sys 0m0.055s 00:08:02.312 14:10:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:02.312 14:10:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:02.312 ************************************ 00:08:02.312 END TEST filesystem_in_capsule_xfs 00:08:02.312 ************************************ 00:08:02.312 14:10:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:02.882 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:03.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.142 14:10:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 325106 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 325106 ']' 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 325106 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:03.142 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 325106 00:08:03.403 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:03.403 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:03.403 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 325106' 00:08:03.403 killing process with pid 325106 00:08:03.403 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 325106 00:08:03.403 14:10:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 325106 00:08:03.403 14:10:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:03.403 00:08:03.403 real 0m15.708s 00:08:03.403 user 1m2.183s 00:08:03.403 sys 0m1.058s 00:08:03.403 14:10:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:03.403 14:10:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.403 ************************************ 00:08:03.403 END TEST nvmf_filesystem_in_capsule 00:08:03.403 ************************************ 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@120 -- # set +e 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:03.663 rmmod nvme_tcp 00:08:03.663 rmmod nvme_fabrics 00:08:03.663 rmmod nvme_keyring 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.663 14:10:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.575 14:10:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:05.836 00:08:05.836 real 0m38.335s 00:08:05.836 user 1m52.053s 00:08:05.836 sys 0m8.176s 00:08:05.836 14:10:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:05.836 14:10:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:05.836 ************************************ 00:08:05.836 END TEST nvmf_filesystem 00:08:05.836 ************************************ 00:08:05.836 14:10:29 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:05.836 14:10:29 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:05.836 14:10:29 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:05.836 14:10:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:05.836 ************************************ 00:08:05.836 START TEST nvmf_target_discovery 00:08:05.836 ************************************ 00:08:05.836 14:10:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:05.836 * Looking for test storage... 
00:08:05.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:05.837 14:10:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.976 14:10:37 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:13.976 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:13.976 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:13.976 Found net devices under 0000:31:00.0: cvl_0_0 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:13.976 Found net devices under 0000:31:00.1: cvl_0_1 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:13.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.764 ms 00:08:13.976 00:08:13.976 --- 10.0.0.2 ping statistics --- 00:08:13.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.976 rtt min/avg/max/mdev = 0.764/0.764/0.764/0.000 ms 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:13.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:08:13.976 00:08:13.976 --- 10.0.0.1 ping statistics --- 00:08:13.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.976 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:08:13.976 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=333038 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 333038 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 333038 ']' 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:13.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:13.977 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.977 [2024-06-07 14:10:37.510245] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:08:13.977 [2024-06-07 14:10:37.510282] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.977 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.977 [2024-06-07 14:10:37.570973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.977 [2024-06-07 14:10:37.603430] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.977 [2024-06-07 14:10:37.603467] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.977 [2024-06-07 14:10:37.603474] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.977 [2024-06-07 14:10:37.603481] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.977 [2024-06-07 14:10:37.603486] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.977 [2024-06-07 14:10:37.603620] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.977 [2024-06-07 14:10:37.603738] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.977 [2024-06-07 14:10:37.603894] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.977 [2024-06-07 14:10:37.603895] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.239 [2024-06-07 14:10:37.738004] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:14.239 14:10:37 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.239 Null1 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.239 [2024-06-07 14:10:37.798357] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.239 Null2 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:14.239 14:10:37 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.239 Null3 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.239 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.501 Null4 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.501 14:10:37 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.501 14:10:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:08:14.501 00:08:14.501 Discovery Log Number of Records 6, Generation counter 6 00:08:14.501 =====Discovery Log Entry 0====== 00:08:14.501 trtype: tcp 00:08:14.501 adrfam: ipv4 00:08:14.501 subtype: current discovery subsystem 00:08:14.501 treq: not required 00:08:14.501 portid: 0 00:08:14.501 trsvcid: 4420 00:08:14.501 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:14.501 traddr: 10.0.0.2 00:08:14.501 eflags: explicit discovery connections, duplicate discovery information 00:08:14.501 sectype: none 00:08:14.501 =====Discovery Log Entry 1====== 00:08:14.501 trtype: tcp 00:08:14.501 adrfam: ipv4 00:08:14.501 subtype: nvme subsystem 00:08:14.501 treq: not required 00:08:14.501 portid: 0 00:08:14.501 trsvcid: 4420 00:08:14.501 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:14.501 traddr: 10.0.0.2 00:08:14.501 eflags: none 00:08:14.501 sectype: none 00:08:14.501 =====Discovery Log Entry 2====== 00:08:14.501 trtype: tcp 00:08:14.501 adrfam: ipv4 00:08:14.501 subtype: nvme subsystem 00:08:14.501 treq: not required 00:08:14.501 portid: 0 00:08:14.501 trsvcid: 4420 00:08:14.501 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:14.501 traddr: 10.0.0.2 00:08:14.501 eflags: none 00:08:14.501 sectype: none 00:08:14.501 =====Discovery Log Entry 3====== 00:08:14.501 trtype: tcp 00:08:14.501 adrfam: ipv4 00:08:14.501 subtype: nvme subsystem 00:08:14.501 treq: not required 00:08:14.501 portid: 0 00:08:14.501 trsvcid: 4420 00:08:14.501 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:14.501 traddr: 10.0.0.2 00:08:14.501 eflags: none 00:08:14.501 sectype: none 00:08:14.501 =====Discovery Log Entry 4====== 00:08:14.501 trtype: tcp 00:08:14.501 adrfam: ipv4 00:08:14.501 subtype: nvme subsystem 00:08:14.501 treq: not required 
00:08:14.501 portid: 0 00:08:14.501 trsvcid: 4420 00:08:14.501 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:14.501 traddr: 10.0.0.2 00:08:14.501 eflags: none 00:08:14.501 sectype: none 00:08:14.501 =====Discovery Log Entry 5====== 00:08:14.501 trtype: tcp 00:08:14.501 adrfam: ipv4 00:08:14.501 subtype: discovery subsystem referral 00:08:14.501 treq: not required 00:08:14.501 portid: 0 00:08:14.501 trsvcid: 4430 00:08:14.501 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:14.501 traddr: 10.0.0.2 00:08:14.501 eflags: none 00:08:14.501 sectype: none 00:08:14.501 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:14.501 Perform nvmf subsystem discovery via RPC 00:08:14.501 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:14.501 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.501 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.501 [ 00:08:14.501 { 00:08:14.501 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:14.501 "subtype": "Discovery", 00:08:14.501 "listen_addresses": [ 00:08:14.501 { 00:08:14.501 "trtype": "TCP", 00:08:14.501 "adrfam": "IPv4", 00:08:14.501 "traddr": "10.0.0.2", 00:08:14.501 "trsvcid": "4420" 00:08:14.501 } 00:08:14.501 ], 00:08:14.501 "allow_any_host": true, 00:08:14.502 "hosts": [] 00:08:14.502 }, 00:08:14.502 { 00:08:14.502 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.502 "subtype": "NVMe", 00:08:14.502 "listen_addresses": [ 00:08:14.502 { 00:08:14.502 "trtype": "TCP", 00:08:14.502 "adrfam": "IPv4", 00:08:14.502 "traddr": "10.0.0.2", 00:08:14.502 "trsvcid": "4420" 00:08:14.502 } 00:08:14.502 ], 00:08:14.502 "allow_any_host": true, 00:08:14.502 "hosts": [], 00:08:14.502 "serial_number": "SPDK00000000000001", 00:08:14.502 "model_number": "SPDK bdev Controller", 00:08:14.502 "max_namespaces": 32, 00:08:14.502 "min_cntlid": 1, 00:08:14.502 "max_cntlid": 65519, 00:08:14.502 "namespaces": [ 00:08:14.502 { 00:08:14.502 "nsid": 1, 00:08:14.502 "bdev_name": "Null1", 00:08:14.502 "name": "Null1", 00:08:14.502 "nguid": "C0A071875A89405D8C0924D39D92C48A", 00:08:14.502 "uuid": "c0a07187-5a89-405d-8c09-24d39d92c48a" 00:08:14.502 } 00:08:14.502 ] 00:08:14.502 }, 00:08:14.502 { 00:08:14.502 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:14.502 "subtype": "NVMe", 00:08:14.502 "listen_addresses": [ 00:08:14.502 { 00:08:14.502 "trtype": "TCP", 00:08:14.502 "adrfam": "IPv4", 00:08:14.502 "traddr": "10.0.0.2", 00:08:14.502 "trsvcid": "4420" 00:08:14.502 } 00:08:14.502 ], 00:08:14.502 "allow_any_host": true, 00:08:14.502 "hosts": [], 00:08:14.502 "serial_number": "SPDK00000000000002", 00:08:14.502 "model_number": "SPDK bdev Controller", 00:08:14.502 "max_namespaces": 32, 00:08:14.502 "min_cntlid": 1, 00:08:14.502 "max_cntlid": 65519, 00:08:14.502 "namespaces": [ 00:08:14.502 { 00:08:14.502 "nsid": 1, 00:08:14.502 "bdev_name": "Null2", 00:08:14.502 "name": "Null2", 00:08:14.502 "nguid": "4EB2E360EE3D469ABA26FB1889870745", 00:08:14.502 "uuid": "4eb2e360-ee3d-469a-ba26-fb1889870745" 00:08:14.502 } 00:08:14.502 ] 00:08:14.502 }, 00:08:14.502 { 00:08:14.502 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:14.502 "subtype": "NVMe", 00:08:14.502 "listen_addresses": [ 00:08:14.502 { 00:08:14.502 "trtype": "TCP", 00:08:14.502 "adrfam": "IPv4", 00:08:14.502 "traddr": "10.0.0.2", 00:08:14.502 "trsvcid": "4420" 00:08:14.502 } 00:08:14.502 ], 00:08:14.502 "allow_any_host": true, 
00:08:14.502 "hosts": [], 00:08:14.502 "serial_number": "SPDK00000000000003", 00:08:14.502 "model_number": "SPDK bdev Controller", 00:08:14.502 "max_namespaces": 32, 00:08:14.502 "min_cntlid": 1, 00:08:14.502 "max_cntlid": 65519, 00:08:14.502 "namespaces": [ 00:08:14.502 { 00:08:14.502 "nsid": 1, 00:08:14.502 "bdev_name": "Null3", 00:08:14.502 "name": "Null3", 00:08:14.502 "nguid": "234AC4C45545462884B57E966DA71938", 00:08:14.502 "uuid": "234ac4c4-5545-4628-84b5-7e966da71938" 00:08:14.502 } 00:08:14.502 ] 00:08:14.502 }, 00:08:14.502 { 00:08:14.502 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:14.502 "subtype": "NVMe", 00:08:14.502 "listen_addresses": [ 00:08:14.502 { 00:08:14.502 "trtype": "TCP", 00:08:14.502 "adrfam": "IPv4", 00:08:14.502 "traddr": "10.0.0.2", 00:08:14.502 "trsvcid": "4420" 00:08:14.502 } 00:08:14.502 ], 00:08:14.502 "allow_any_host": true, 00:08:14.502 "hosts": [], 00:08:14.502 "serial_number": "SPDK00000000000004", 00:08:14.502 "model_number": "SPDK bdev Controller", 00:08:14.502 "max_namespaces": 32, 00:08:14.502 "min_cntlid": 1, 00:08:14.502 "max_cntlid": 65519, 00:08:14.502 "namespaces": [ 00:08:14.502 { 00:08:14.502 "nsid": 1, 00:08:14.502 "bdev_name": "Null4", 00:08:14.502 "name": "Null4", 00:08:14.502 "nguid": "E4E1D170A3A649E5BB324C56F5894198", 00:08:14.502 "uuid": "e4e1d170-a3a6-49e5-bb32-4c56f5894198" 00:08:14.502 } 00:08:14.502 ] 00:08:14.502 } 00:08:14.502 ] 00:08:14.502 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.502 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:14.762 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.762 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.762 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.762 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.762 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.762 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:14.762 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.762 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.762 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.762 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.762 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:14.762 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
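The teardown mirrors the setup: the entries immediately above and below delete each subsystem and its null bdev in turn, remove the port-4430 referral, and confirm via bdev_get_bdevs that no test bdevs remain before nvmftestfini unloads the NVMe kernel modules. As a rough sketch (again condensed from the rpc_cmd entries rather than quoted from discovery.sh):

    # cleanup as traced in the surrounding entries
    for i in $(seq 1 4); do
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        rpc_cmd bdev_null_delete "Null$i"
    done
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')    # empty in this run: no bdevs left behind
    [ -n "$check_bdevs" ]                                       # the '[' -n '' ']' test traced above evaluates false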
00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.763 rmmod nvme_tcp 00:08:14.763 rmmod nvme_fabrics 00:08:14.763 rmmod nvme_keyring 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 333038 ']' 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 333038 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 333038 ']' 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 333038 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:14.763 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 333038 00:08:15.024 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:15.024 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:15.024 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 333038' 00:08:15.024 killing process with pid 333038 00:08:15.024 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 333038 00:08:15.024 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 333038 00:08:15.024 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:15.024 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:15.024 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:15.024 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.024 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:15.024 14:10:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.024 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.024 14:10:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.566 14:10:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:17.566 00:08:17.566 real 0m11.304s 00:08:17.566 user 0m6.022s 00:08:17.566 sys 0m6.217s 00:08:17.566 14:10:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:17.566 14:10:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.566 ************************************ 00:08:17.566 END TEST nvmf_target_discovery 00:08:17.566 ************************************ 00:08:17.566 14:10:40 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:17.566 14:10:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:17.566 14:10:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:17.566 14:10:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:17.566 ************************************ 00:08:17.566 START TEST nvmf_referrals 00:08:17.566 ************************************ 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:17.566 * Looking for test storage... 00:08:17.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.566 14:10:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:17.567 14:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.732 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.732 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.732 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.732 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.732 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.732 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.732 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.732 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.732 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.732 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.733 14:10:48 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:25.733 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:25.733 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.733 14:10:48 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:25.733 Found net devices under 0000:31:00.0: cvl_0_0 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:25.733 Found net devices under 0000:31:00.1: cvl_0_1 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.733 14:10:48 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.866 ms 00:08:25.733 00:08:25.733 --- 10.0.0.2 ping statistics --- 00:08:25.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.733 rtt min/avg/max/mdev = 0.866/0.866/0.866/0.000 ms 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:08:25.733 00:08:25.733 --- 10.0.0.1 ping statistics --- 00:08:25.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.733 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=337948 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 337948 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 337948 ']' 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
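
Note: the trace above is nvmf_tcp_init from nvmf/common.sh. One port of the E810 pair (cvl_0_0) is moved into a network namespace to act as the target while the other (cvl_0_1) stays in the default namespace as the initiator, and both ends are then ping-checked. A minimal standalone sketch of that setup, assuming the interface names and addresses from this run and root privileges:

ip netns add cvl_0_0_ns_spdk                                        # namespace that will host the SPDK target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic reach the initiator side
ping -c 1 10.0.0.2                                                  # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check
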
00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.733 14:10:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.733 [2024-06-07 14:10:48.780226] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:08:25.733 [2024-06-07 14:10:48.780295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.733 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.733 [2024-06-07 14:10:48.857693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.733 [2024-06-07 14:10:48.898358] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.733 [2024-06-07 14:10:48.898401] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.733 [2024-06-07 14:10:48.898408] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.733 [2024-06-07 14:10:48.898415] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.733 [2024-06-07 14:10:48.898421] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.733 [2024-06-07 14:10:48.898563] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.733 [2024-06-07 14:10:48.898685] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.733 [2024-06-07 14:10:48.898843] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.733 [2024-06-07 14:10:48.898843] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.010 [2024-06-07 14:10:49.607892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.010 [2024-06-07 14:10:49.620041] 
tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.010 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.270 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.270 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:26.270 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:26.270 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:26.270 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:26.270 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.270 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:26.270 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.270 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.270 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery 
subsystem").traddr' 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.271 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.531 14:10:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd 
nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.531 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:26.791 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:26.791 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:26.791 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:26.791 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:26.791 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:26.791 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.791 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == 
"nvme subsystem")' 00:08:26.791 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:26.791 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:26.791 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:26.791 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:26.791 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.791 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == 
\1\2\7\.\0\.\0\.\2 ]] 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.052 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:27.312 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:27.312 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:27.312 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:27.312 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:27.312 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.312 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:27.312 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:27.312 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:27.312 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:27.312 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.312 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:27.313 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:27.313 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:27.313 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:27.313 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.313 14:10:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:27.313 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:27.313 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:27.313 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:27.313 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:27.313 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:27.313 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:27.313 14:10:50 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # sort 00:08:27.572 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:27.573 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:27.573 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:27.573 14:10:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:27.573 14:10:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:27.573 14:10:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:27.573 14:10:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:27.573 14:10:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:27.573 14:10:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:27.573 14:10:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:27.573 rmmod nvme_tcp 00:08:27.573 rmmod nvme_fabrics 00:08:27.573 rmmod nvme_keyring 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 337948 ']' 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 337948 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 337948 ']' 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 337948 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 337948 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 337948' 00:08:27.573 killing process with pid 337948 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 337948 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 337948 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:27.573 14:10:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:27.832 14:10:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:27.832 14:10:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.832 14:10:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:27.833 14:10:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.833 14:10:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.833 14:10:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.742 14:10:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.742 00:08:29.742 real 0m12.605s 00:08:29.742 user 0m12.358s 00:08:29.742 sys 0m6.364s 
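
Note: the referrals test that just finished drives the target over JSON-RPC (the rpc_cmd wrapper in the trace) and cross-checks the result through the nvme-cli discovery log. A hedged sketch of that round trip using scripts/rpc.py directly, with the addresses and ports taken from this run; rpc.py is assumed to be talking to the default /var/tmp/spdk.sock, and the per-host hostnqn/hostid values are omitted here:

# Create the TCP transport and a discovery listener, then register three referrals.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
# Target-side view of the referrals.
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
# Host-side view: the referrals appear in the discovery log as non-current entries.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
# Remove one referral again; get_referrals should now report one entry fewer.
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
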
00:08:29.742 14:10:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:29.742 14:10:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.742 ************************************ 00:08:29.742 END TEST nvmf_referrals 00:08:29.742 ************************************ 00:08:29.742 14:10:53 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:29.742 14:10:53 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:08:29.742 14:10:53 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:29.742 14:10:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.742 ************************************ 00:08:29.742 START TEST nvmf_connect_disconnect 00:08:29.742 ************************************ 00:08:29.742 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:30.002 * Looking for test storage... 00:08:30.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 
-- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.002 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.003 
14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:30.003 14:10:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.139 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.139 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.139 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:38.139 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.139 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.140 14:11:01 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:38.140 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:38.140 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
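
Note: the block above repeats gather_supported_nvmf_pci_devs for the connect_disconnect test: known Intel (0x8086) and Mellanox (0x15b3) device IDs are collected into the e810/x722/mlx arrays and the netdev behind each matching PCI function is read from sysfs. A simplified, hedged sketch of that lookup for the 0x159b (E810) ID seen in this run:

# Walk sysfs and print the net devices that sit behind E810 (0x8086:0x159b) functions.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")
    device=$(cat "$pci/device")
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    [[ -d $pci/net ]] || continue                      # skip functions not bound to a net driver
    for net in "$pci"/net/*; do
        echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done
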
00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:38.140 Found net devices under 0000:31:00.0: cvl_0_0 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:38.140 Found net devices under 0000:31:00.1: cvl_0_1 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:38.140 14:11:01 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:38.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:08:38.140 00:08:38.140 --- 10.0.0.2 ping statistics --- 00:08:38.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.140 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:08:38.140 00:08:38.140 --- 10.0.0.1 ping statistics --- 00:08:38.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.140 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=343190 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 343190 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 343190 ']' 00:08:38.140 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.141 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:38.141 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.141 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:38.141 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:38.141 14:11:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.141 [2024-06-07 14:11:01.618806] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
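
Note: at this point the target application is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten blocks until the RPC socket answers. A hedged sketch of that launch-and-wait step; the polling loop is a simplification of autotest_common.sh's waitforlisten rather than a copy of it, and SPDK_DIR plus the rpc_get_methods probe are assumptions for illustration:

# Start nvmf_tgt in the target namespace and wait for its UNIX-domain RPC socket.
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"
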
00:08:38.141 [2024-06-07 14:11:01.618857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.141 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.141 [2024-06-07 14:11:01.690676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.141 [2024-06-07 14:11:01.724957] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.141 [2024-06-07 14:11:01.724992] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.141 [2024-06-07 14:11:01.725000] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.141 [2024-06-07 14:11:01.725006] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.141 [2024-06-07 14:11:01.725011] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.141 [2024-06-07 14:11:01.725152] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.141 [2024-06-07 14:11:01.725348] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.141 [2024-06-07 14:11:01.725349] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.141 [2024-06-07 14:11:01.725200] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.083 [2024-06-07 14:11:02.443858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:39.083 14:11:02 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.083 [2024-06-07 14:11:02.503110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:39.083 14:11:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:41.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.024 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.909 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:26.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:28.170 rmmod nvme_tcp 00:12:28.170 rmmod nvme_fabrics 00:12:28.170 rmmod nvme_keyring 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 343190 ']' 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 343190 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@949 -- # '[' -z 343190 
']' 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 343190 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 343190 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 343190' 00:12:28.170 killing process with pid 343190 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 343190 00:12:28.170 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 343190 00:12:28.432 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:28.432 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:28.432 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:28.432 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:28.432 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:28.432 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.432 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.432 14:14:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.345 14:14:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:30.345 00:12:30.345 real 4m0.590s 00:12:30.345 user 15m15.368s 00:12:30.345 sys 0m19.206s 00:12:30.345 14:14:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:30.345 14:14:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:30.345 ************************************ 00:12:30.345 END TEST nvmf_connect_disconnect 00:12:30.345 ************************************ 00:12:30.606 14:14:53 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:30.606 14:14:53 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:30.606 14:14:53 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:30.606 14:14:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:30.606 ************************************ 00:12:30.606 START TEST nvmf_multitarget 00:12:30.606 ************************************ 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:30.607 * Looking for test storage... 
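For reference, the nvmf_connect_disconnect pass that just finished reduces to roughly the sketch below. The RPC names, the NQN, the 10.0.0.2:4420 listener, the 8 I/O queues (-i 8) and the 100 iterations are taken from the log above; the rpc.py wrapper path, the settle delay and the loop structure are assumptions, not the test script verbatim:

  # provision the target over /var/tmp/spdk.sock (rpc_cmd in the log, typically wrapping scripts/rpc.py)
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512                                    # creates Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 100 connect/disconnect cycles from the initiator side
  for ((i = 0; i < 100; i++)); do
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      sleep 1                                                         # stand-in for the test's own wait helpers
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1                   # logs "NQN:... disconnected 1 controller(s)"
  done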
00:12:30.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:30.607 14:14:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.820 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.820 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:38.821 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:38.821 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:38.821 Found net devices under 0000:31:00.0: cvl_0_0 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:38.821 Found net devices under 0000:31:00.1: cvl_0_1 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.821 14:15:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:38.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:12:38.821 00:12:38.821 --- 10.0.0.2 ping statistics --- 00:12:38.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.821 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:12:38.821 00:12:38.821 --- 10.0.0.1 ping statistics --- 00:12:38.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.821 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:38.821 14:15:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:38.822 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:38.822 14:15:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:38.822 14:15:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.822 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=394644 00:12:38.822 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 394644 00:12:38.822 14:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.822 14:15:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 394644 ']' 00:12:38.822 14:15:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.822 14:15:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:38.822 14:15:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.822 14:15:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:38.822 14:15:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:38.822 [2024-06-07 14:15:02.308247] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
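The ping results above conclude the two-interface namespace topology the harness builds before launching the target. Condensed into a standalone sketch (every command below appears verbatim in the log; root privileges are assumed):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # default namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> default namespace

  # the target then runs inside the namespace: -i = shm id, -e = tracepoint group mask, -m = core mask (0xF -> 4 reactors)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF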
00:12:38.822 [2024-06-07 14:15:02.308323] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.822 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.822 [2024-06-07 14:15:02.386465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.822 [2024-06-07 14:15:02.426836] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.822 [2024-06-07 14:15:02.426879] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.822 [2024-06-07 14:15:02.426886] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.822 [2024-06-07 14:15:02.426893] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.822 [2024-06-07 14:15:02.426899] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.822 [2024-06-07 14:15:02.427044] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.822 [2024-06-07 14:15:02.427299] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.822 [2024-06-07 14:15:02.427299] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.822 [2024-06-07 14:15:02.427144] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.764 14:15:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:39.764 14:15:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:12:39.764 14:15:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.764 14:15:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:39.764 14:15:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:39.764 14:15:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.764 14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:39.764 14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:39.764 14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:39.764 14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:39.764 14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:39.764 "nvmf_tgt_1" 00:12:39.764 14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:39.764 "nvmf_tgt_2" 00:12:40.025 14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:40.025 14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:40.025 14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:40.025 
14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:40.025 true 00:12:40.025 14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:40.286 true 00:12:40.286 14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:40.286 14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:40.286 14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:40.286 14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:40.286 14:15:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:40.286 14:15:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:40.286 14:15:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:40.286 14:15:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:40.286 14:15:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:40.286 14:15:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:40.287 rmmod nvme_tcp 00:12:40.287 rmmod nvme_fabrics 00:12:40.287 rmmod nvme_keyring 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 394644 ']' 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 394644 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 394644 ']' 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 394644 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 394644 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 394644' 00:12:40.287 killing process with pid 394644 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 394644 00:12:40.287 14:15:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 394644 00:12:40.548 14:15:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:40.548 14:15:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:40.548 14:15:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:40.548 14:15:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:40.548 14:15:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:40.548 14:15:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.548 14:15:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:40.548 14:15:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.096 14:15:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:43.096 00:12:43.096 real 0m12.083s 00:12:43.096 user 0m9.477s 00:12:43.096 sys 0m6.403s 00:12:43.096 14:15:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:43.096 14:15:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:43.096 ************************************ 00:12:43.096 END TEST nvmf_multitarget 00:12:43.096 ************************************ 00:12:43.096 14:15:06 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:43.096 14:15:06 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:43.096 14:15:06 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:43.096 14:15:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:43.096 ************************************ 00:12:43.096 START TEST nvmf_rpc 00:12:43.096 ************************************ 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:43.096 * Looking for test storage... 00:12:43.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.096 14:15:06 
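The nvmf_multitarget pass that just ended boils down to the create/verify/delete sequence driven through multitarget_rpc.py. Condensed sketch (script path, target names and the -s 32 flag are as logged; the explicit -eq assertions stand in for the test's jq-length checks):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]     # only the default target to begin with
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32          # prints "nvmf_tgt_1"
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32          # prints "nvmf_tgt_2"
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]     # default target plus the two new ones
  $RPC nvmf_delete_target -n nvmf_tgt_1                # prints "true"
  $RPC nvmf_delete_target -n nvmf_tgt_2                # prints "true"
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]     # back to just the default target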
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.096 
14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:43.096 14:15:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:51.233 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:51.233 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:51.233 Found net devices under 0000:31:00.0: cvl_0_0 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.233 
14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:51.233 Found net devices under 0000:31:00.1: cvl_0_1 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.233 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:51.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:51.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:12:51.234 00:12:51.234 --- 10.0.0.2 ping statistics --- 00:12:51.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.234 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:12:51.234 00:12:51.234 --- 10.0.0.1 ping statistics --- 00:12:51.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.234 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=400136 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 400136 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 400136 ']' 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:51.234 14:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.234 [2024-06-07 14:15:14.565497] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:12:51.234 [2024-06-07 14:15:14.565561] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.234 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.234 [2024-06-07 14:15:14.645787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.234 [2024-06-07 14:15:14.685459] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.234 [2024-06-07 14:15:14.685502] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.234 [2024-06-07 14:15:14.685509] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.234 [2024-06-07 14:15:14.685516] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.234 [2024-06-07 14:15:14.685522] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.234 [2024-06-07 14:15:14.685666] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.234 [2024-06-07 14:15:14.685784] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.234 [2024-06-07 14:15:14.685943] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.234 [2024-06-07 14:15:14.685944] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.806 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:51.806 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:12:51.806 14:15:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:51.806 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:51.806 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.806 14:15:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.806 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:51.806 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:51.806 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.806 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:51.806 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:51.806 "tick_rate": 2400000000, 00:12:51.806 "poll_groups": [ 00:12:51.806 { 00:12:51.806 "name": "nvmf_tgt_poll_group_000", 00:12:51.806 "admin_qpairs": 0, 00:12:51.806 "io_qpairs": 0, 00:12:51.806 "current_admin_qpairs": 0, 00:12:51.806 "current_io_qpairs": 0, 00:12:51.806 "pending_bdev_io": 0, 00:12:51.806 "completed_nvme_io": 0, 00:12:51.806 "transports": [] 00:12:51.806 }, 00:12:51.806 { 00:12:51.806 "name": "nvmf_tgt_poll_group_001", 00:12:51.806 "admin_qpairs": 0, 00:12:51.806 "io_qpairs": 0, 00:12:51.806 "current_admin_qpairs": 0, 00:12:51.806 "current_io_qpairs": 0, 00:12:51.806 "pending_bdev_io": 0, 00:12:51.806 "completed_nvme_io": 0, 00:12:51.806 "transports": [] 00:12:51.806 }, 00:12:51.806 { 00:12:51.806 "name": "nvmf_tgt_poll_group_002", 00:12:51.806 "admin_qpairs": 0, 00:12:51.806 "io_qpairs": 0, 00:12:51.806 "current_admin_qpairs": 0, 00:12:51.806 "current_io_qpairs": 0, 00:12:51.806 "pending_bdev_io": 0, 00:12:51.806 "completed_nvme_io": 0, 00:12:51.806 "transports": [] 
00:12:51.806 }, 00:12:51.806 { 00:12:51.806 "name": "nvmf_tgt_poll_group_003", 00:12:51.806 "admin_qpairs": 0, 00:12:51.806 "io_qpairs": 0, 00:12:51.806 "current_admin_qpairs": 0, 00:12:51.806 "current_io_qpairs": 0, 00:12:51.806 "pending_bdev_io": 0, 00:12:51.806 "completed_nvme_io": 0, 00:12:51.806 "transports": [] 00:12:51.806 } 00:12:51.806 ] 00:12:51.806 }' 00:12:51.806 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:51.806 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:51.806 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:51.806 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.068 [2024-06-07 14:15:15.513263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:52.068 "tick_rate": 2400000000, 00:12:52.068 "poll_groups": [ 00:12:52.068 { 00:12:52.068 "name": "nvmf_tgt_poll_group_000", 00:12:52.068 "admin_qpairs": 0, 00:12:52.068 "io_qpairs": 0, 00:12:52.068 "current_admin_qpairs": 0, 00:12:52.068 "current_io_qpairs": 0, 00:12:52.068 "pending_bdev_io": 0, 00:12:52.068 "completed_nvme_io": 0, 00:12:52.068 "transports": [ 00:12:52.068 { 00:12:52.068 "trtype": "TCP" 00:12:52.068 } 00:12:52.068 ] 00:12:52.068 }, 00:12:52.068 { 00:12:52.068 "name": "nvmf_tgt_poll_group_001", 00:12:52.068 "admin_qpairs": 0, 00:12:52.068 "io_qpairs": 0, 00:12:52.068 "current_admin_qpairs": 0, 00:12:52.068 "current_io_qpairs": 0, 00:12:52.068 "pending_bdev_io": 0, 00:12:52.068 "completed_nvme_io": 0, 00:12:52.068 "transports": [ 00:12:52.068 { 00:12:52.068 "trtype": "TCP" 00:12:52.068 } 00:12:52.068 ] 00:12:52.068 }, 00:12:52.068 { 00:12:52.068 "name": "nvmf_tgt_poll_group_002", 00:12:52.068 "admin_qpairs": 0, 00:12:52.068 "io_qpairs": 0, 00:12:52.068 "current_admin_qpairs": 0, 00:12:52.068 "current_io_qpairs": 0, 00:12:52.068 "pending_bdev_io": 0, 00:12:52.068 "completed_nvme_io": 0, 00:12:52.068 "transports": [ 00:12:52.068 { 00:12:52.068 "trtype": "TCP" 00:12:52.068 } 00:12:52.068 ] 00:12:52.068 }, 00:12:52.068 { 00:12:52.068 "name": "nvmf_tgt_poll_group_003", 00:12:52.068 "admin_qpairs": 0, 00:12:52.068 "io_qpairs": 0, 00:12:52.068 "current_admin_qpairs": 0, 00:12:52.068 "current_io_qpairs": 0, 00:12:52.068 "pending_bdev_io": 0, 00:12:52.068 "completed_nvme_io": 0, 00:12:52.068 "transports": [ 00:12:52.068 { 00:12:52.068 "trtype": "TCP" 00:12:52.068 } 00:12:52.068 ] 00:12:52.068 } 00:12:52.068 ] 
00:12:52.068 }' 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.068 Malloc1 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.068 [2024-06-07 14:15:15.701002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
--hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:12:52.068 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:12:52.329 [2024-06-07 14:15:15.727726] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:12:52.329 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:52.329 could not add new controller: failed to write to nvme-fabrics device 00:12:52.329 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:12:52.329 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:52.329 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:52.329 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:52.329 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:52.329 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.329 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.329 14:15:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.329 14:15:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.713 14:15:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.713 14:15:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:53.713 14:15:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.713 14:15:17 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:53.713 14:15:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x 
/usr/sbin/nvme ]] 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.257 [2024-06-07 14:15:19.415034] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:12:56.257 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:56.257 could not add new controller: failed to write to nvme-fabrics device 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:56.257 14:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.641 14:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:57.641 14:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:57.641 14:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.641 14:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:57.641 14:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:59.550 14:15:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:59.550 14:15:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:59.550 14:15:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.550 14:15:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:59.550 14:15:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.550 14:15:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:59.550 14:15:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 
-- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.550 [2024-06-07 14:15:23.062951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:59.550 14:15:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.932 14:15:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.932 14:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:13:00.932 14:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.932 14:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:13:00.932 14:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.505 [2024-06-07 14:15:26.730293] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.505 
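For reference, each pass of the loop traced above reduces to the short sequence below (a condensed sketch, not literal test source; rpc_cmd in this harness is assumed to wrap SPDK's scripts/rpc.py, and the NQN, serial, bdev, host UUID and address values are simply the ones used in this run):

  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb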
14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:03.505 14:15:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.887 14:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.887 14:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:13:04.887 14:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.887 14:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:13:04.887 14:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:13:06.796 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.797 14:15:30 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.797 [2024-06-07 14:15:30.389265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:06.797 14:15:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.709 14:15:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.709 14:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:13:08.709 14:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.709 14:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:13:08.709 14:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:13:10.635 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:10.635 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:10.635 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.635 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:10.635 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.635 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:13:10.635 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.635 14:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.635 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:13:10.635 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:10.635 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:13:10.635 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:10.635 14:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.635 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:13:10.635 14:15:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.635 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.635 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.635 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.635 14:15:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.635 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.635 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.635 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.635 14:15:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.636 [2024-06-07 14:15:34.051157] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.636 14:15:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.021 14:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.021 14:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:13:12.021 14:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 
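The waitforserial step that follows each connect is, in effect, the polling loop below. This is a paraphrase of the helper visible in the trace (sleep, lsblk, grep -c on the serial assigned with -s above), not the literal autotest_common.sh source:

  wait_for_serial() {
      # Poll until the expected number of block devices with this serial shows up.
      local serial=$1 expected=${2:-1} count i=0
      while (( i++ <= 15 )); do
          sleep 2
          count=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( count == expected )) && return 0
      done
      return 1
  }
  wait_for_serial SPDKISFASTANDAWESOME 1   # one namespace expected after connect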
00:13:12.021 14:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:13:12.021 14:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:13:14.566 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.567 [2024-06-07 14:15:37.769477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:14.567 14:15:37 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:14.567 14:15:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.952 14:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.952 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:13:15.952 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.952 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:13:15.952 14:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
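The matching teardown in each pass is equally small: disconnect the initiator, wait for the serial to drop out of lsblk, then remove the namespace by the NSID it was added with (-n 5) and delete the subsystem. Collected in one place (values again taken from this run):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1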
00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.865 [2024-06-07 14:15:41.471645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:17.865 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.127 [2024-06-07 14:15:41.531784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.127 [2024-06-07 14:15:41.595962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.127 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 [2024-06-07 14:15:41.656145] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 [2024-06-07 14:15:41.716341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:18.128 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:18.389 "tick_rate": 2400000000, 00:13:18.389 "poll_groups": [ 00:13:18.389 { 00:13:18.389 "name": "nvmf_tgt_poll_group_000", 00:13:18.389 "admin_qpairs": 0, 00:13:18.389 
"io_qpairs": 224, 00:13:18.389 "current_admin_qpairs": 0, 00:13:18.389 "current_io_qpairs": 0, 00:13:18.389 "pending_bdev_io": 0, 00:13:18.389 "completed_nvme_io": 273, 00:13:18.389 "transports": [ 00:13:18.389 { 00:13:18.389 "trtype": "TCP" 00:13:18.389 } 00:13:18.389 ] 00:13:18.389 }, 00:13:18.389 { 00:13:18.389 "name": "nvmf_tgt_poll_group_001", 00:13:18.389 "admin_qpairs": 1, 00:13:18.389 "io_qpairs": 223, 00:13:18.389 "current_admin_qpairs": 0, 00:13:18.389 "current_io_qpairs": 0, 00:13:18.389 "pending_bdev_io": 0, 00:13:18.389 "completed_nvme_io": 275, 00:13:18.389 "transports": [ 00:13:18.389 { 00:13:18.389 "trtype": "TCP" 00:13:18.389 } 00:13:18.389 ] 00:13:18.389 }, 00:13:18.389 { 00:13:18.389 "name": "nvmf_tgt_poll_group_002", 00:13:18.389 "admin_qpairs": 6, 00:13:18.389 "io_qpairs": 218, 00:13:18.389 "current_admin_qpairs": 0, 00:13:18.389 "current_io_qpairs": 0, 00:13:18.389 "pending_bdev_io": 0, 00:13:18.389 "completed_nvme_io": 418, 00:13:18.389 "transports": [ 00:13:18.389 { 00:13:18.389 "trtype": "TCP" 00:13:18.389 } 00:13:18.389 ] 00:13:18.389 }, 00:13:18.389 { 00:13:18.389 "name": "nvmf_tgt_poll_group_003", 00:13:18.389 "admin_qpairs": 0, 00:13:18.389 "io_qpairs": 224, 00:13:18.389 "current_admin_qpairs": 0, 00:13:18.389 "current_io_qpairs": 0, 00:13:18.389 "pending_bdev_io": 0, 00:13:18.389 "completed_nvme_io": 273, 00:13:18.389 "transports": [ 00:13:18.389 { 00:13:18.389 "trtype": "TCP" 00:13:18.389 } 00:13:18.389 ] 00:13:18.389 } 00:13:18.389 ] 00:13:18.389 }' 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:18.389 rmmod nvme_tcp 00:13:18.389 rmmod nvme_fabrics 00:13:18.389 rmmod nvme_keyring 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:18.389 14:15:41 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 400136 ']' 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 400136 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 400136 ']' 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 400136 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 400136 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 400136' 00:13:18.389 killing process with pid 400136 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 400136 00:13:18.389 14:15:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 400136 00:13:18.649 14:15:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:18.649 14:15:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:18.649 14:15:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:18.649 14:15:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:18.649 14:15:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:18.650 14:15:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.650 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.650 14:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.561 14:15:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:20.561 00:13:20.561 real 0m37.988s 00:13:20.561 user 1m51.887s 00:13:20.561 sys 0m7.697s 00:13:20.561 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:20.561 14:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.561 ************************************ 00:13:20.561 END TEST nvmf_rpc 00:13:20.561 ************************************ 00:13:20.822 14:15:44 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:20.822 14:15:44 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:20.822 14:15:44 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:20.822 14:15:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:20.822 ************************************ 00:13:20.822 START TEST nvmf_invalid 00:13:20.822 ************************************ 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:20.822 * Looking for test storage... 
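The qpair totals checked just before teardown ((( 7 > 0 )) and (( 889 > 0 ))) come from the jsum helper in target/rpc.sh: each call runs jq over the nvmf_get_stats output and sums the selected field with awk. A minimal sketch of that aggregation, assuming the stats JSON printed above is held in $stats (the helper name and filter strings follow the script's own convention):

  jsum() {
      local filter=$1
      # one numeric field per poll group, then sum the column
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  jsum '.poll_groups[].admin_qpairs'   # 0 + 1 + 6 + 0         = 7
  jsum '.poll_groups[].io_qpairs'      # 224 + 223 + 218 + 224 = 889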
00:13:20.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:20.822 14:15:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:28.995 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:28.995 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.995 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:28.996 Found net devices under 0000:31:00.0: cvl_0_0 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:28.996 Found net devices under 0000:31:00.1: cvl_0_1 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.996 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:29.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:29.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.824 ms 00:13:29.257 00:13:29.257 --- 10.0.0.2 ping statistics --- 00:13:29.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.257 rtt min/avg/max/mdev = 0.824/0.824/0.824/0.000 ms 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:13:29.257 00:13:29.257 --- 10.0.0.1 ping statistics --- 00:13:29.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.257 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=410346 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 410346 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 410346 ']' 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:29.257 14:15:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.258 14:15:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.258 [2024-06-07 14:15:52.758543] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
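In the nvmf_tcp_init steps above, the first detected port (cvl_0_0) is moved into a private network namespace and given the target address, while the second port (cvl_0_1) stays in the root namespace as the initiator side; nvmf_tgt is then started inside that namespace. Condensed into the essential commands, this is a sketch of what the common.sh helpers run, using the interface names detected on this host:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # cross-namespace reachability check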
00:13:29.258 [2024-06-07 14:15:52.758615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.258 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.258 [2024-06-07 14:15:52.836784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.258 [2024-06-07 14:15:52.877482] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.258 [2024-06-07 14:15:52.877525] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.258 [2024-06-07 14:15:52.877533] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.258 [2024-06-07 14:15:52.877539] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.258 [2024-06-07 14:15:52.877545] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.258 [2024-06-07 14:15:52.877691] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.258 [2024-06-07 14:15:52.877812] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.258 [2024-06-07 14:15:52.877931] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.258 [2024-06-07 14:15:52.877932] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.199 14:15:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:30.199 14:15:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:13:30.199 14:15:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:30.199 14:15:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:30.199 14:15:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:30.199 14:15:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.199 14:15:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:30.199 14:15:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode181 00:13:30.200 [2024-06-07 14:15:53.706141] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:30.200 14:15:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:30.200 { 00:13:30.200 "nqn": "nqn.2016-06.io.spdk:cnode181", 00:13:30.200 "tgt_name": "foobar", 00:13:30.200 "method": "nvmf_create_subsystem", 00:13:30.200 "req_id": 1 00:13:30.200 } 00:13:30.200 Got JSON-RPC error response 00:13:30.200 response: 00:13:30.200 { 00:13:30.200 "code": -32603, 00:13:30.200 "message": "Unable to find target foobar" 00:13:30.200 }' 00:13:30.200 14:15:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:30.200 { 00:13:30.200 "nqn": "nqn.2016-06.io.spdk:cnode181", 00:13:30.200 "tgt_name": "foobar", 00:13:30.200 "method": "nvmf_create_subsystem", 00:13:30.200 "req_id": 1 00:13:30.200 } 00:13:30.200 Got JSON-RPC error response 00:13:30.200 response: 00:13:30.200 { 00:13:30.200 "code": -32603, 00:13:30.200 "message": "Unable to find target foobar" 00:13:30.200 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:30.200 14:15:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:30.200 14:15:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode31861 00:13:30.460 [2024-06-07 14:15:53.882761] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31861: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:30.460 14:15:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:30.460 { 00:13:30.460 "nqn": "nqn.2016-06.io.spdk:cnode31861", 00:13:30.460 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:30.460 "method": "nvmf_create_subsystem", 00:13:30.460 "req_id": 1 00:13:30.460 } 00:13:30.460 Got JSON-RPC error response 00:13:30.460 response: 00:13:30.460 { 00:13:30.460 "code": -32602, 00:13:30.460 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:30.460 }' 00:13:30.460 14:15:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:30.460 { 00:13:30.460 "nqn": "nqn.2016-06.io.spdk:cnode31861", 00:13:30.460 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:30.460 "method": "nvmf_create_subsystem", 00:13:30.460 "req_id": 1 00:13:30.460 } 00:13:30.460 Got JSON-RPC error response 00:13:30.460 response: 00:13:30.460 { 00:13:30.460 "code": -32602, 00:13:30.460 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:30.460 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:30.460 14:15:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:30.460 14:15:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7365 00:13:30.460 [2024-06-07 14:15:54.059274] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7365: invalid model number 'SPDK_Controller' 00:13:30.460 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:30.460 { 00:13:30.460 "nqn": "nqn.2016-06.io.spdk:cnode7365", 00:13:30.460 "model_number": "SPDK_Controller\u001f", 00:13:30.460 "method": "nvmf_create_subsystem", 00:13:30.460 "req_id": 1 00:13:30.460 } 00:13:30.460 Got JSON-RPC error response 00:13:30.460 response: 00:13:30.460 { 00:13:30.460 "code": -32602, 00:13:30.460 "message": "Invalid MN SPDK_Controller\u001f" 00:13:30.460 }' 00:13:30.460 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:30.460 { 00:13:30.460 "nqn": "nqn.2016-06.io.spdk:cnode7365", 00:13:30.460 "model_number": "SPDK_Controller\u001f", 00:13:30.460 "method": "nvmf_create_subsystem", 00:13:30.460 "req_id": 1 00:13:30.460 } 00:13:30.460 Got JSON-RPC error response 00:13:30.460 response: 00:13:30.460 { 00:13:30.460 "code": -32602, 00:13:30.460 "message": "Invalid MN SPDK_Controller\u001f" 00:13:30.460 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:30.460 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:30.460 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:30.460 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' 
'92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:30.460 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:30.460 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:30.460 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:30.461 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.461 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:30.461 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:30.461 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:30.461 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.461 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:30.720 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:30.721 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.721 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.721 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:30.721 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:30.721 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
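The long run of printf/echo/string+= lines here is target/invalid.sh's gen_random_s building a 21-character serial number one character at a time from the chars table (printable ASCII 32-127, with RANDOM=0 assigned earlier so the sequence is reproducible); the result is then passed to nvmf_create_subsystem as a deliberately invalid -s value. A compact equivalent of that loop (a sketch, not the script's own code):

  gen_random_s() {
      local length=$1 ll c string=
      for (( ll = 0; ll < length; ll++ )); do
          # pick one code point from the same 32..127 range as the chars table
          printf -v c '\\x%02x' $(( 32 + RANDOM % 96 ))
          string+=$(echo -e "$c")
      done
      echo "$string"
  }
  gen_random_s 21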
00:13:30.721 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.721 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.721 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:30.721 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:30.721 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:30.721 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.721 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.721 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ + == \- ]] 00:13:30.721 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '+E]5!:1adEqnwlmxi7!' 00:13:30.721 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '+E]5!:1adEqnwlmxi7!' nqn.2016-06.io.spdk:cnode23132 00:13:30.982 [2024-06-07 14:15:54.392366] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23132: invalid serial number '+E]5!:1adEqnwlmxi7!' 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:30.982 { 00:13:30.982 "nqn": "nqn.2016-06.io.spdk:cnode23132", 00:13:30.982 "serial_number": "+E]5!:1adEqnw\u007flmxi7!\u007f", 00:13:30.982 "method": "nvmf_create_subsystem", 00:13:30.982 "req_id": 1 00:13:30.982 } 00:13:30.982 Got JSON-RPC error response 00:13:30.982 response: 00:13:30.982 { 00:13:30.982 "code": -32602, 00:13:30.982 "message": "Invalid SN +E]5!:1adEqnw\u007flmxi7!\u007f" 00:13:30.982 }' 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:30.982 { 00:13:30.982 "nqn": "nqn.2016-06.io.spdk:cnode23132", 00:13:30.982 "serial_number": "+E]5!:1adEqnw\u007flmxi7!\u007f", 00:13:30.982 "method": "nvmf_create_subsystem", 00:13:30.982 "req_id": 1 00:13:30.982 } 00:13:30.982 Got JSON-RPC error response 00:13:30.982 response: 00:13:30.982 { 00:13:30.982 "code": -32602, 00:13:30.982 "message": "Invalid SN +E]5!:1adEqnw\u007flmxi7!\u007f" 00:13:30.982 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:30.982 14:15:54 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.982 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.983 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
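Each iteration traced above is the same append step from target/invalid.sh (the @24/@25 markers): render a character code as hex with printf %x, expand the escape with echo -e, and append the result to the growing string; code 45, for example, becomes \x2d, which is '-'. A condensed sketch of that loop and of the model-number check it feeds (invalid.sh@58/@59) follows. How each code is chosen is not visible in this excerpt, so the RANDOM expression, the shortened rpc.py path, and the error-capture plumbing are assumptions, not the script's exact text.

    # Condensed sketch of the character loop traced above; ll, length and string
    # are the names used in the trace, the code-selection expression is assumed.
    string=''
    length=41                                        # length of the string generated in this run
    for (( ll = 0; ll < length; ll++ )); do
        code=$(printf %x $(( RANDOM % 94 + 33 )))    # e.g. 45 -> 2d
        string+=$(echo -e "\x${code}")               # e.g. \x2d -> '-'
    done
    # The finished string is then offered as a deliberately invalid model number,
    # and the test only asserts that the JSON-RPC error mentions "Invalid MN":
    out=$(scripts/rpc.py nvmf_create_subsystem -d "$string" nqn.2016-06.io.spdk:cnode1746 2>&1) || true
    [[ $out == *"Invalid MN"* ]]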
00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ [ == \- ]] 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '[-ll3"cz!Pcc,Dv|y;+UZPK}c|0kp8Y?;M]V}&+m5' 00:13:31.244 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '[-ll3"cz!Pcc,Dv|y;+UZPK}c|0kp8Y?;M]V}&+m5' nqn.2016-06.io.spdk:cnode1746 00:13:31.244 [2024-06-07 14:15:54.873902] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1746: invalid model number '[-ll3"cz!Pcc,Dv|y;+UZPK}c|0kp8Y?;M]V}&+m5' 00:13:31.504 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:31.504 { 00:13:31.504 "nqn": "nqn.2016-06.io.spdk:cnode1746", 00:13:31.504 "model_number": "[-ll3\"cz!Pcc,Dv|y;+UZPK}c|0kp8Y?;M]V}&+m5", 00:13:31.504 "method": "nvmf_create_subsystem", 00:13:31.504 "req_id": 1 00:13:31.504 } 00:13:31.504 Got JSON-RPC error response 00:13:31.504 response: 00:13:31.504 { 00:13:31.504 "code": -32602, 00:13:31.504 "message": "Invalid MN [-ll3\"cz!Pcc,Dv|y;+UZPK}c|0kp8Y?;M]V}&+m5" 00:13:31.504 }' 00:13:31.504 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:31.504 { 00:13:31.504 "nqn": "nqn.2016-06.io.spdk:cnode1746", 00:13:31.504 "model_number": "[-ll3\"cz!Pcc,Dv|y;+UZPK}c|0kp8Y?;M]V}&+m5", 00:13:31.504 "method": "nvmf_create_subsystem", 00:13:31.504 "req_id": 1 00:13:31.504 } 00:13:31.504 Got JSON-RPC error response 00:13:31.504 response: 00:13:31.504 { 00:13:31.504 "code": -32602, 00:13:31.504 "message": "Invalid MN [-ll3\"cz!Pcc,Dv|y;+UZPK}c|0kp8Y?;M]V}&+m5" 00:13:31.504 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:31.504 14:15:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:31.504 [2024-06-07 14:15:55.046523] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.504 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:31.764 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:31.764 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:31.764 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:31.764 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:31.764 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:31.764 [2024-06-07 14:15:55.399613] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:32.023 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:32.023 { 00:13:32.023 "nqn": "nqn.2016-06.io.spdk:cnode", 
00:13:32.023 "listen_address": { 00:13:32.023 "trtype": "tcp", 00:13:32.023 "traddr": "", 00:13:32.023 "trsvcid": "4421" 00:13:32.023 }, 00:13:32.023 "method": "nvmf_subsystem_remove_listener", 00:13:32.023 "req_id": 1 00:13:32.023 } 00:13:32.023 Got JSON-RPC error response 00:13:32.023 response: 00:13:32.023 { 00:13:32.023 "code": -32602, 00:13:32.023 "message": "Invalid parameters" 00:13:32.023 }' 00:13:32.023 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:32.023 { 00:13:32.023 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:32.023 "listen_address": { 00:13:32.023 "trtype": "tcp", 00:13:32.023 "traddr": "", 00:13:32.023 "trsvcid": "4421" 00:13:32.023 }, 00:13:32.023 "method": "nvmf_subsystem_remove_listener", 00:13:32.023 "req_id": 1 00:13:32.023 } 00:13:32.023 Got JSON-RPC error response 00:13:32.023 response: 00:13:32.023 { 00:13:32.023 "code": -32602, 00:13:32.023 "message": "Invalid parameters" 00:13:32.023 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:32.023 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4379 -i 0 00:13:32.023 [2024-06-07 14:15:55.572121] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4379: invalid cntlid range [0-65519] 00:13:32.023 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:32.023 { 00:13:32.023 "nqn": "nqn.2016-06.io.spdk:cnode4379", 00:13:32.023 "min_cntlid": 0, 00:13:32.023 "method": "nvmf_create_subsystem", 00:13:32.023 "req_id": 1 00:13:32.023 } 00:13:32.023 Got JSON-RPC error response 00:13:32.023 response: 00:13:32.023 { 00:13:32.023 "code": -32602, 00:13:32.023 "message": "Invalid cntlid range [0-65519]" 00:13:32.023 }' 00:13:32.023 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:32.023 { 00:13:32.023 "nqn": "nqn.2016-06.io.spdk:cnode4379", 00:13:32.023 "min_cntlid": 0, 00:13:32.023 "method": "nvmf_create_subsystem", 00:13:32.023 "req_id": 1 00:13:32.023 } 00:13:32.023 Got JSON-RPC error response 00:13:32.023 response: 00:13:32.023 { 00:13:32.023 "code": -32602, 00:13:32.023 "message": "Invalid cntlid range [0-65519]" 00:13:32.023 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.023 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23444 -i 65520 00:13:32.283 [2024-06-07 14:15:55.744680] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23444: invalid cntlid range [65520-65519] 00:13:32.283 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:32.283 { 00:13:32.283 "nqn": "nqn.2016-06.io.spdk:cnode23444", 00:13:32.283 "min_cntlid": 65520, 00:13:32.283 "method": "nvmf_create_subsystem", 00:13:32.283 "req_id": 1 00:13:32.283 } 00:13:32.283 Got JSON-RPC error response 00:13:32.283 response: 00:13:32.283 { 00:13:32.283 "code": -32602, 00:13:32.283 "message": "Invalid cntlid range [65520-65519]" 00:13:32.283 }' 00:13:32.283 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:32.283 { 00:13:32.283 "nqn": "nqn.2016-06.io.spdk:cnode23444", 00:13:32.283 "min_cntlid": 65520, 00:13:32.283 "method": "nvmf_create_subsystem", 00:13:32.283 "req_id": 1 00:13:32.283 } 00:13:32.283 Got JSON-RPC error response 00:13:32.283 response: 00:13:32.283 { 00:13:32.283 "code": -32602, 
00:13:32.283 "message": "Invalid cntlid range [65520-65519]" 00:13:32.283 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.283 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19347 -I 0 00:13:32.283 [2024-06-07 14:15:55.917269] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19347: invalid cntlid range [1-0] 00:13:32.543 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:32.543 { 00:13:32.543 "nqn": "nqn.2016-06.io.spdk:cnode19347", 00:13:32.543 "max_cntlid": 0, 00:13:32.543 "method": "nvmf_create_subsystem", 00:13:32.543 "req_id": 1 00:13:32.543 } 00:13:32.543 Got JSON-RPC error response 00:13:32.543 response: 00:13:32.543 { 00:13:32.543 "code": -32602, 00:13:32.543 "message": "Invalid cntlid range [1-0]" 00:13:32.543 }' 00:13:32.543 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:32.543 { 00:13:32.543 "nqn": "nqn.2016-06.io.spdk:cnode19347", 00:13:32.543 "max_cntlid": 0, 00:13:32.543 "method": "nvmf_create_subsystem", 00:13:32.543 "req_id": 1 00:13:32.543 } 00:13:32.543 Got JSON-RPC error response 00:13:32.543 response: 00:13:32.543 { 00:13:32.543 "code": -32602, 00:13:32.543 "message": "Invalid cntlid range [1-0]" 00:13:32.543 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.543 14:15:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5041 -I 65520 00:13:32.543 [2024-06-07 14:15:56.081769] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5041: invalid cntlid range [1-65520] 00:13:32.543 14:15:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:32.543 { 00:13:32.543 "nqn": "nqn.2016-06.io.spdk:cnode5041", 00:13:32.543 "max_cntlid": 65520, 00:13:32.543 "method": "nvmf_create_subsystem", 00:13:32.543 "req_id": 1 00:13:32.543 } 00:13:32.543 Got JSON-RPC error response 00:13:32.543 response: 00:13:32.543 { 00:13:32.543 "code": -32602, 00:13:32.543 "message": "Invalid cntlid range [1-65520]" 00:13:32.543 }' 00:13:32.543 14:15:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:32.543 { 00:13:32.543 "nqn": "nqn.2016-06.io.spdk:cnode5041", 00:13:32.543 "max_cntlid": 65520, 00:13:32.543 "method": "nvmf_create_subsystem", 00:13:32.543 "req_id": 1 00:13:32.543 } 00:13:32.543 Got JSON-RPC error response 00:13:32.543 response: 00:13:32.543 { 00:13:32.543 "code": -32602, 00:13:32.543 "message": "Invalid cntlid range [1-65520]" 00:13:32.543 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.543 14:15:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6740 -i 6 -I 5 00:13:32.804 [2024-06-07 14:15:56.250321] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6740: invalid cntlid range [6-5] 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:32.804 { 00:13:32.804 "nqn": "nqn.2016-06.io.spdk:cnode6740", 00:13:32.804 "min_cntlid": 6, 00:13:32.804 "max_cntlid": 5, 00:13:32.804 "method": "nvmf_create_subsystem", 00:13:32.804 "req_id": 1 00:13:32.804 } 00:13:32.804 Got JSON-RPC error response 00:13:32.804 response: 00:13:32.804 { 00:13:32.804 "code": -32602, 00:13:32.804 
"message": "Invalid cntlid range [6-5]" 00:13:32.804 }' 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:32.804 { 00:13:32.804 "nqn": "nqn.2016-06.io.spdk:cnode6740", 00:13:32.804 "min_cntlid": 6, 00:13:32.804 "max_cntlid": 5, 00:13:32.804 "method": "nvmf_create_subsystem", 00:13:32.804 "req_id": 1 00:13:32.804 } 00:13:32.804 Got JSON-RPC error response 00:13:32.804 response: 00:13:32.804 { 00:13:32.804 "code": -32602, 00:13:32.804 "message": "Invalid cntlid range [6-5]" 00:13:32.804 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:32.804 { 00:13:32.804 "name": "foobar", 00:13:32.804 "method": "nvmf_delete_target", 00:13:32.804 "req_id": 1 00:13:32.804 } 00:13:32.804 Got JSON-RPC error response 00:13:32.804 response: 00:13:32.804 { 00:13:32.804 "code": -32602, 00:13:32.804 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:32.804 }' 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:32.804 { 00:13:32.804 "name": "foobar", 00:13:32.804 "method": "nvmf_delete_target", 00:13:32.804 "req_id": 1 00:13:32.804 } 00:13:32.804 Got JSON-RPC error response 00:13:32.804 response: 00:13:32.804 { 00:13:32.804 "code": -32602, 00:13:32.804 "message": "The specified target doesn't exist, cannot delete it." 00:13:32.804 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:32.804 rmmod nvme_tcp 00:13:32.804 rmmod nvme_fabrics 00:13:32.804 rmmod nvme_keyring 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 410346 ']' 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 410346 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@949 -- # '[' -z 410346 ']' 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # kill -0 410346 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # uname 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:32.804 14:15:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 410346 00:13:33.065 14:15:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@955 -- # 
process_name=reactor_0 00:13:33.065 14:15:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:13:33.065 14:15:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # echo 'killing process with pid 410346' 00:13:33.065 killing process with pid 410346 00:13:33.065 14:15:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@968 -- # kill 410346 00:13:33.065 14:15:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@973 -- # wait 410346 00:13:33.065 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:33.065 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:33.065 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:33.065 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:33.065 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:33.065 14:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.065 14:15:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.065 14:15:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.610 14:15:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:35.610 00:13:35.610 real 0m14.407s 00:13:35.610 user 0m19.357s 00:13:35.610 sys 0m7.072s 00:13:35.610 14:15:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:35.610 14:15:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:35.610 ************************************ 00:13:35.610 END TEST nvmf_invalid 00:13:35.610 ************************************ 00:13:35.610 14:15:58 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:35.610 14:15:58 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:35.610 14:15:58 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:35.610 14:15:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:35.610 ************************************ 00:13:35.610 START TEST nvmf_abort 00:13:35.610 ************************************ 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:35.610 * Looking for test storage... 
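The cntlid-range cases in the nvmf_invalid run above all share one shape: call nvmf_create_subsystem with an out-of-range -i (min_cntlid) or -I (max_cntlid) value, capture the JSON-RPC error, and match the message against the expected "Invalid cntlid range" text. A minimal sketch of one case (path shortened, capture plumbing assumed):

    # One of the range checks traced above (invalid.sh@73/@74); the other cases differ
    # only in the -i/-I values and in the range quoted back by the error message.
    out=$(scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4379 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]]

After the final delete_target case against the name foobar, the suite clears its trap and calls nvmftestfini, which is the rmmod / killprocess / ip addr flush sequence visible just above.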
00:13:35.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:35.610 14:15:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.748 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:43.749 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.749 14:16:06 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:43.749 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:43.749 Found net devices under 0000:31:00.0: cvl_0_0 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:43.749 Found net devices under 0000:31:00.1: cvl_0_1 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:13:43.749 00:13:43.749 --- 10.0.0.2 ping statistics --- 00:13:43.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.749 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:13:43.749 00:13:43.749 --- 10.0.0.1 ping statistics --- 00:13:43.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.749 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=415879 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 415879 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 415879 ']' 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:43.749 14:16:06 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:43.749 [2024-06-07 14:16:07.025894] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:13:43.749 [2024-06-07 14:16:07.025943] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.749 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.749 [2024-06-07 14:16:07.114906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:43.749 [2024-06-07 14:16:07.150487] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.749 [2024-06-07 14:16:07.150535] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
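At this point the target process has been launched and is printing its startup notices. The plumbing that got here, condensed from the nvmf_tcp_init and nvmfappstart traces above (paths shortened, device and namespace names as used in this run): the target-side port of the cvl pair is moved into a network namespace, both ends get 10.0.0.x addresses, port 4420 is opened, and nvmf_tgt is started inside that namespace.

    # Loopback TCP "fabric" used by these tests, as set up by nvmf_tcp_init above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
    # One ping in each direction (above) confirms the path before the target starts.
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    waitforlisten $!    # autotest_common.sh helper; waits for /var/tmp/spdk.sock (pid 415879 here)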
00:13:43.749 [2024-06-07 14:16:07.150543] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.749 [2024-06-07 14:16:07.150550] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.749 [2024-06-07 14:16:07.150555] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.749 [2024-06-07 14:16:07.150671] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.749 [2024-06-07 14:16:07.150828] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.749 [2024-06-07 14:16:07.150828] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.320 [2024-06-07 14:16:07.839228] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.320 Malloc0 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.320 Delay0 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.320 14:16:07 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.320 [2024-06-07 14:16:07.916575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.320 14:16:07 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:44.320 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.581 [2024-06-07 14:16:07.984876] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:46.490 Initializing NVMe Controllers 00:13:46.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:46.490 controller IO queue size 128 less than required 00:13:46.490 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:46.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:46.490 Initialization complete. Launching workers. 
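With the workers launched, the abort run above is fully wired: a 64 MB Malloc bdev wrapped in a delay bdev (presumably so in-flight I/O lingers long enough to be aborted), exposed as namespace 1 of cnode0 on the 10.0.0.2:4420 listener, and driven by the abort example at queue depth 128. The RPC sequence and the invocation, condensed from the trace (paths shortened):

    # Target-side setup traced above (abort.sh@17-@27), issued with the rpc_cmd helper.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Initiator-side load generator (abort.sh@30); the -q 128 request is what triggers
    # the "IO queue size 128 less than required" notice above.
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128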
00:13:46.490 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34926 00:13:46.490 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34987, failed to submit 62 00:13:46.490 success 34930, unsuccess 57, failed 0 00:13:46.490 14:16:10 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:46.490 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.490 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:46.490 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.490 14:16:10 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:46.490 14:16:10 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:46.490 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:46.490 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:46.490 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:46.490 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:46.490 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:46.490 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:46.490 rmmod nvme_tcp 00:13:46.750 rmmod nvme_fabrics 00:13:46.750 rmmod nvme_keyring 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 415879 ']' 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 415879 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 415879 ']' 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 415879 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 415879 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 415879' 00:13:46.750 killing process with pid 415879 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 415879 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 415879 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:46.750 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.751 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:46.751 14:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.751 14:16:10 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.751 14:16:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.364 14:16:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:49.364 00:13:49.364 real 0m13.666s 00:13:49.364 user 0m13.671s 00:13:49.364 sys 0m6.813s 00:13:49.364 14:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:49.364 14:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:49.364 ************************************ 00:13:49.364 END TEST nvmf_abort 00:13:49.364 ************************************ 00:13:49.364 14:16:12 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:49.364 14:16:12 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:49.364 14:16:12 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:49.364 14:16:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:49.364 ************************************ 00:13:49.364 START TEST nvmf_ns_hotplug_stress 00:13:49.364 ************************************ 00:13:49.364 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:49.365 * Looking for test storage... 00:13:49.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.365 14:16:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.365 14:16:12 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:49.365 14:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:57.502 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:57.503 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:57.503 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:57.503 14:16:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:57.503 Found net devices under 0000:31:00.0: cvl_0_0 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:57.503 Found net devices under 0000:31:00.1: cvl_0_1 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
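[Editor's sketch, not part of the trace: the interface pairing above (cvl_0_0 as target, cvl_0_1 as initiator) feeds the TCP test-network setup that the next entries perform. Condensed from those entries, and keeping the device names and addresses exactly as they appear in the trace, the topology built is roughly the following; this is an illustrative summary, not the verbatim nvmf/common.sh source.]

    # Move the target NIC into its own network namespace so target and
    # initiator behave like two hosts on the same 10.0.0.0/24 link.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator check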
00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:57.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:57.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:13:57.503 00:13:57.503 --- 10.0.0.2 ping statistics --- 00:13:57.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.503 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:57.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:57.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:13:57.503 00:13:57.503 --- 10.0.0.1 ping statistics --- 00:13:57.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.503 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=421244 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 421244 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 421244 ']' 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:57.503 14:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.503 [2024-06-07 14:16:20.819155] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:13:57.503 [2024-06-07 14:16:20.819226] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.503 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.503 [2024-06-07 14:16:20.914256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:57.503 [2024-06-07 14:16:20.961723] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:57.503 [2024-06-07 14:16:20.961796] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.503 [2024-06-07 14:16:20.961805] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.503 [2024-06-07 14:16:20.961811] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.503 [2024-06-07 14:16:20.961817] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.503 [2024-06-07 14:16:20.961946] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.504 [2024-06-07 14:16:20.962106] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.504 [2024-06-07 14:16:20.962106] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:58.074 14:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:58.074 14:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:13:58.074 14:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:58.074 14:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:58.074 14:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.074 14:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.074 14:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:58.074 14:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:58.334 [2024-06-07 14:16:21.770292] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.334 14:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:58.334 14:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.593 [2024-06-07 14:16:22.111681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.593 14:16:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:58.854 14:16:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:58.854 Malloc0 00:13:58.854 14:16:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:59.114 Delay0 00:13:59.114 14:16:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.373 14:16:22 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:59.373 NULL1 00:13:59.373 14:16:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:59.633 14:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=421664 00:13:59.633 14:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:59.633 14:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:13:59.633 14:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.633 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.893 14:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.893 14:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:59.893 14:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:00.153 [2024-06-07 14:16:23.621687] bdev.c:5000:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:14:00.153 true 00:14:00.153 14:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:00.153 14:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.411 14:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.411 14:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:00.411 14:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:00.670 true 00:14:00.670 14:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:00.670 14:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.931 14:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.931 14:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:00.931 14:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:01.191 true 00:14:01.191 14:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 421664 00:14:01.191 14:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.451 14:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.451 14:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:01.451 14:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:01.711 true 00:14:01.711 14:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:01.711 14:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.972 14:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.972 14:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:01.972 14:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:02.232 true 00:14:02.232 14:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:02.232 14:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.492 14:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.492 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:02.492 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:02.752 true 00:14:02.752 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:02.752 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.752 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.012 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:03.012 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:03.273 true 00:14:03.273 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:03.273 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.273 14:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.533 14:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:03.533 14:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:03.792 true 00:14:03.792 14:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:03.792 14:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.792 14:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.051 14:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:04.051 14:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:04.310 true 00:14:04.310 14:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:04.310 14:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.310 14:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.569 14:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:04.569 14:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:04.829 true 00:14:04.829 14:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:04.829 14:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.829 14:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.088 14:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:05.088 14:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:05.348 true 00:14:05.348 14:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:05.348 14:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.348 14:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.608 14:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:05.608 14:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:05.869 true 00:14:05.869 14:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:05.869 14:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.869 14:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.129 14:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:06.129 14:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:06.390 true 00:14:06.390 14:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:06.390 14:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.390 14:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.650 14:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:06.650 14:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:06.912 true 00:14:06.912 14:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:06.912 14:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.912 14:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.173 14:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:07.173 14:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:07.173 true 00:14:07.434 14:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:07.434 14:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.434 14:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.695 14:16:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:07.695 14:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:07.695 true 00:14:07.956 14:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:07.956 14:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.956 14:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.216 14:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:08.216 14:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:08.216 true 00:14:08.216 14:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:08.216 14:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.480 14:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.802 14:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:08.802 14:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:08.802 true 00:14:08.802 14:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:08.802 14:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.062 14:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.062 14:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:09.062 14:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:09.322 true 00:14:09.322 14:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:09.322 14:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.582 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.582 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:09.582 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:09.842 true 00:14:09.842 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:09.842 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.103 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.103 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:10.103 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:10.364 true 00:14:10.364 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:10.364 14:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.624 14:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.624 14:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:10.624 14:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:10.884 true 00:14:10.884 14:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:10.884 14:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.144 14:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.144 14:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:11.144 14:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:11.404 true 00:14:11.404 14:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:11.404 14:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.664 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.664 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:11.664 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:11.924 true 00:14:11.924 14:16:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:11.924 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.184 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.184 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:12.184 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:12.444 true 00:14:12.444 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:12.444 14:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.705 14:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.705 14:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:12.705 14:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:12.965 true 00:14:12.965 14:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:12.965 14:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.965 14:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.226 14:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:13.226 14:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:13.486 true 00:14:13.486 14:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:13.486 14:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.486 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.747 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:13.747 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:14.007 true 00:14:14.007 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:14.007 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.007 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.267 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:14.267 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:14.527 true 00:14:14.527 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:14.527 14:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.527 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.787 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:14.787 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:14.787 true 00:14:15.049 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:15.049 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.049 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.310 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:15.310 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:15.310 true 00:14:15.571 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:15.571 14:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.571 14:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.832 14:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:15.832 14:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:15.832 true 00:14:15.832 14:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:15.832 14:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.093 
14:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.355 14:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:16.355 14:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:16.355 true 00:14:16.355 14:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:16.355 14:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.615 14:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.876 14:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:16.876 14:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:16.876 true 00:14:16.876 14:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:16.876 14:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.138 14:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.399 14:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:17.399 14:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:17.399 true 00:14:17.399 14:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:17.399 14:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.660 14:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.922 14:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:17.922 14:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:17.922 true 00:14:17.922 14:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:17.922 14:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.183 14:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.444 14:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:18.444 14:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:18.444 true 00:14:18.444 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:18.444 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.705 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.966 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:18.966 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:18.966 true 00:14:18.966 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:18.966 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.227 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.227 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:19.227 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:19.488 true 00:14:19.488 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:19.488 14:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.748 14:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.748 14:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:14:19.748 14:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:20.008 true 00:14:20.008 14:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:20.009 14:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.270 14:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.270 14:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:14:20.270 14:16:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:20.531 true 00:14:20.531 14:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:20.531 14:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.531 14:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.792 14:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:14:20.792 14:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:21.053 true 00:14:21.053 14:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:21.053 14:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.053 14:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.313 14:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:14:21.313 14:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:21.572 true 00:14:21.572 14:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:21.572 14:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.572 14:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.832 14:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:14:21.832 14:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:22.092 true 00:14:22.092 14:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:22.092 14:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.092 14:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.355 14:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:14:22.355 14:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 
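The trace above keeps repeating one pattern: check that the background I/O process is still alive, hot-remove namespace 1 from cnode1, re-attach the Delay0 bdev as a namespace, then grow the NULL1 bdev by one unit. The following is a minimal bash sketch of that loop reconstructed from the ns_hotplug_stress.sh markers @44-@50 shown in the trace; it is not the verbatim script. PERF_PID stands in for the background I/O job's PID (the log shows the literal 421664), and rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path.

# Sketch only - reconstructed from the @44-@50 trace markers, not the real script text.
while kill -0 "$PERF_PID"; do                                      # @44: loop while the I/O job is alive
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45: hot-remove namespace 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: re-attach the Delay0 bdev
    null_size=$((null_size + 1))                                   # @49: 1033, 1034, ... in this run
    rpc.py bdev_null_resize NULL1 "$null_size"                     # @50: grow NULL1; prints "true" on success
done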
00:14:22.355 true 00:14:22.661 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:22.661 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.661 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.921 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:14:22.921 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:22.921 true 00:14:22.921 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:22.921 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.181 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.441 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:14:23.441 14:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:23.441 true 00:14:23.441 14:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:23.441 14:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.700 14:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.959 14:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:14:23.959 14:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:14:23.959 true 00:14:23.959 14:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:23.959 14:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.218 14:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.476 14:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:14:24.476 14:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:14:24.476 true 00:14:24.476 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:24.476 14:16:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.735 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.994 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:14:24.994 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:14:24.994 true 00:14:24.994 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:24.994 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.254 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.254 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:14:25.254 14:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:14:25.514 true 00:14:25.514 14:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:25.514 14:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.772 14:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.772 14:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:14:25.772 14:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:14:26.032 true 00:14:26.032 14:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:26.032 14:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.292 14:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.292 14:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:14:26.292 14:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:14:26.553 true 00:14:26.553 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:26.553 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:26.813 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.813 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:14:26.813 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:14:27.072 true 00:14:27.072 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:27.072 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.332 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.332 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:14:27.332 14:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:14:27.593 true 00:14:27.593 14:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:27.593 14:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.855 14:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.855 14:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:14:27.855 14:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:14:28.114 true 00:14:28.114 14:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:28.114 14:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.374 14:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.374 14:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:14:28.374 14:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:14:28.633 true 00:14:28.633 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:28.633 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.633 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.892 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:14:28.892 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:14:29.152 true 00:14:29.152 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:29.152 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.152 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.412 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:14:29.412 14:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:14:29.671 true 00:14:29.671 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:29.671 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.671 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.932 Initializing NVMe Controllers 00:14:29.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:29.932 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:14:29.932 Controller IO queue size 128, less than required. 00:14:29.932 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:29.932 WARNING: Some requested NVMe devices were skipped 00:14:29.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:29.932 Initialization complete. Launching workers. 
00:14:29.932 ========================================================
00:14:29.932 Latency(us)
00:14:29.932 Device Information : IOPS MiB/s Average min max
00:14:29.932 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30586.46 14.93 4184.73 1420.55 10666.38
00:14:29.932 ========================================================
00:14:29.932 Total : 30586.46 14.93 4184.73 1420.55 10666.38
00:14:29.932
00:14:29.932 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:14:29.932 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:14:30.192 true 00:14:30.192 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 421664 00:14:30.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (421664) - No such process 00:14:30.192 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 421664 00:14:30.192 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.192 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:30.452 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:30.452 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:30.452 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:30.452 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:30.452 14:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:30.712 null0 00:14:30.712 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:30.712 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:30.712 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:30.712 null1 00:14:30.712 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:30.712 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:30.712 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:30.973 null2 00:14:30.973 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:30.973 14:16:54
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:30.973 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:31.233 null4 00:14:31.233 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:31.233 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:31.233 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:31.493 null5 00:14:31.493 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:31.493 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:31.493 14:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:31.493 null6 00:14:31.493 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:31.493 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:31.493 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:31.754 null7 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
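At this point the single-namespace loop has ended (the @44 kill -0 check failed once PID 421664 exited), both remaining namespaces were removed, and the trace at @58-@60 shows the setup for the parallel phase: nthreads=8 and one null bdev per worker, null0 through null7. A sketch of that setup loop follows; rpc.py again abbreviates the full scripts/rpc.py path, and reading the two size arguments as a 100 MB bdev with a 4096-byte block size follows the usual bdev_null_create argument order rather than anything stated in the log.

# Sketch of the per-worker bdev setup traced at @58-@60 above.
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    # "100 4096" is copied from the trace; each call prints the new bdev name (null0 ... null7).
    rpc.py bdev_null_create "null$i" 100 4096
done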
00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:31.754 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:31.755 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:31.755 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
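The interleaved @14-@18 and @62-@66 markers in the surrounding trace come from eight backgrounded copies of the add_remove helper, one per null bdev, each adding and then removing its namespace ten times while the parent shell waits on the collected worker PIDs (the "wait 428152 428154 ..." entry a little further down). Below is a sketch of the helper and its launcher reconstructed from those markers; the exact script text is an assumption, but the names, namespace IDs, and loop bounds match the trace.

# Sketch reconstructed from the @14-@18 / @62-@66 trace markers; not the verbatim script.
add_remove() {
    local nsid=$1 bdev=$2                                                          # @14
    for ((i = 0; i < 10; i++)); do                                                 # @16
        rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # @17
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # @18
    done
}

for ((i = 0; i < nthreads; i++)); do   # @62
    add_remove $((i + 1)) "null$i" &   # @63: namespace IDs 1..8 against bdevs null0..null7
    pids+=($!)                         # @64
done
wait "${pids[@]}"                      # @66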
00:14:31.755 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:31.755 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:31.755 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:31.755 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:31.755 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.755 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 428152 428154 428155 428157 428159 428161 428163 428165 00:14:31.755 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:31.755 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:31.755 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:31.755 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.755 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:32.016 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:32.016 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.016 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:32.016 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.017 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:32.279 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.279 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.279 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:32.279 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.279 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.279 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:32.279 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.279 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:32.279 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:32.279 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:32.279 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:32.279 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:32.279 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:32.279 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.540 14:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.540 14:16:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:32.540 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.540 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.540 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:32.540 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.540 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.540 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:32.540 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.540 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:32.540 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:32.540 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:32.540 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:32.540 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:32.801 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:14:33.062 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:33.322 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.322 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.322 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:33.322 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:33.322 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:33.322 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.322 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:33.322 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:33.322 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:33.322 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:33.322 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:33.322 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.322 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.322 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:33.582 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.582 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.582 14:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:33.582 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:33.842 14:16:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:33.842 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.103 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:34.364 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.364 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.364 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:34.364 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:34.364 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:34.364 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:34.364 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:34.364 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:34.364 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:34.364 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.364 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:34.364 14:16:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.364 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.364 14:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.625 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:34.887 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:35.148 14:16:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:35.148 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:35.407 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:35.407 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:35.407 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:35.407 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.408 rmmod nvme_tcp 00:14:35.408 rmmod nvme_fabrics 00:14:35.408 rmmod nvme_keyring 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 421244 ']' 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 421244 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 421244 ']' 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 421244 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:35.408 14:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 421244 00:14:35.408 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:35.408 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:35.408 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 421244' 00:14:35.408 killing process 
with pid 421244 00:14:35.408 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 421244 00:14:35.408 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 421244 00:14:35.668 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:35.668 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:35.668 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:35.668 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:35.668 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:35.668 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.668 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.668 14:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.650 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:37.650 00:14:37.650 real 0m48.700s 00:14:37.650 user 3m14.860s 00:14:37.650 sys 0m17.533s 00:14:37.650 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:37.650 14:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.650 ************************************ 00:14:37.650 END TEST nvmf_ns_hotplug_stress 00:14:37.650 ************************************ 00:14:37.650 14:17:01 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:37.650 14:17:01 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:37.650 14:17:01 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:37.650 14:17:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:37.650 ************************************ 00:14:37.650 START TEST nvmf_connect_stress 00:14:37.650 ************************************ 00:14:37.650 14:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:37.912 * Looking for test storage... 
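The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls in the hotplug trace above all originate from ns_hotplug_stress.sh lines 16-18, which the xtrace shows as a ten-iteration counted loop around one add and one remove per namespace ID. The sketch below is a reconstruction, not the verbatim SPDK script: the rpc.py invocations and the @16-@18 loop shape are taken straight from the trace, while the add_remove helper name and the one-background-worker-per-namespace structure are assumptions inferred from the way NSIDs 1-8 interleave in the log.

    # Hedged sketch of the hotplug stress loop; see caveats above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsystem=nqn.2016-06.io.spdk:cnode1

    add_remove() {                       # illustrative helper, name is assumed
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do                                      # ns_hotplug_stress.sh@16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$subsystem" "$bdev"   # @17
            $rpc_py nvmf_subsystem_remove_ns "$subsystem" "$nsid"           # @18
        done
    }

    for n in $(seq 1 8); do
        add_remove "$n" "null$((n - 1))" &   # null0..null7 serve NSIDs 1..8 in the trace
    done
    wait

The pass criterion visible in the log is simply that every rpc.py call succeeds while the eight workers race; the test then tears the target down (nvmftestfini, the nvme module unloads and the kill of pid 421244 above) and prints the timing summary before connect_stress starts.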
00:14:37.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:37.912 14:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:46.056 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:46.056 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:46.056 Found net devices under 0000:31:00.0: cvl_0_0 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:46.056 14:17:09 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:46.056 Found net devices under 0000:31:00.1: cvl_0_1 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:46.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:46.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:14:46.056 00:14:46.056 --- 10.0.0.2 ping statistics --- 00:14:46.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.056 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:14:46.056 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:46.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:14:46.056 00:14:46.056 --- 10.0.0.1 ping statistics --- 00:14:46.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.057 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=433670 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 433670 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 433670 ']' 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:46.057 14:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.057 [2024-06-07 14:17:09.629031] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
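The nvmf_tcp_init trace above (nvmf/common.sh@229 through @268) splits the dual-port E810 NIC found earlier into a target side and an initiator side: cvl_0_0 is moved into a dedicated network namespace and given 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, and a ping in each direction confirms the path before the target application starts. The block below is only a condensed recap for readability; the commands, interface names, and addresses are exactly those shown in the trace, and the grouping and comments are added here.

    # Recap of the namespace topology set up by nvmf_tcp_init (see trace above).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                     # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                               # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target ns -> root ns
    # nvmfappstart then launches the target inside the namespace:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

Keeping the target port in a separate namespace means the NVMe/TCP traffic between initiator (10.0.0.1) and target (10.0.0.2) has to cross the physical link rather than loopback, which is what makes this a phy-level test of the TCP transport.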
00:14:46.057 [2024-06-07 14:17:09.629094] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.057 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.317 [2024-06-07 14:17:09.725822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:46.317 [2024-06-07 14:17:09.773674] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.317 [2024-06-07 14:17:09.773734] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.317 [2024-06-07 14:17:09.773745] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.317 [2024-06-07 14:17:09.773753] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.317 [2024-06-07 14:17:09.773759] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.317 [2024-06-07 14:17:09.773889] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.317 [2024-06-07 14:17:09.774052] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.317 [2024-06-07 14:17:09.774052] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.887 [2024-06-07 14:17:10.457857] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.887 [2024-06-07 14:17:10.493315] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.887 NULL1 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=434017 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.887 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:47.147 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.407 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:47.407 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:47.407 14:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.407 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:47.407 14:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.666 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:47.666 14:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:47.666 14:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.666 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:47.666 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.236 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:48.236 14:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:48.236 14:17:11 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.236 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:48.236 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.496 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:48.496 14:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:48.496 14:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.496 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:48.496 14:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.755 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:48.755 14:17:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:48.755 14:17:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.755 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:48.755 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.015 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:49.015 14:17:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:49.015 14:17:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.015 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:49.015 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.275 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:49.275 14:17:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:49.275 14:17:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.275 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:49.275 14:17:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.845 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:49.845 14:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:49.845 14:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.845 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:49.845 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.105 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:50.105 14:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:50.105 14:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.105 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:50.105 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.365 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:50.365 14:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:50.365 14:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.365 14:17:13 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:50.365 14:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.624 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:50.624 14:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:50.624 14:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.624 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:50.624 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.885 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:50.885 14:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:50.885 14:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.885 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:50.885 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.455 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:51.455 14:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:51.455 14:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.455 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:51.455 14:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.716 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:51.716 14:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:51.716 14:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.716 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:51.716 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.977 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:51.977 14:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:51.977 14:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.977 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:51.977 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.239 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:52.239 14:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:52.239 14:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.239 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:52.239 14:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.500 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:52.500 14:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:52.500 14:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.500 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:14:52.500 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.072 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:53.072 14:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:53.072 14:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.072 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:53.072 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.333 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:53.333 14:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:53.333 14:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.333 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:53.333 14:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.594 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:53.594 14:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:53.594 14:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.594 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:53.594 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.855 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:53.855 14:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:53.855 14:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.855 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:53.855 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.426 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:54.426 14:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:54.426 14:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.426 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:54.426 14:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.687 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:54.687 14:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:54.687 14:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.687 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:54.687 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.948 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:54.948 14:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:54.948 14:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.948 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:54.948 14:17:18 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.211 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:55.211 14:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:55.211 14:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.211 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:55.211 14:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.515 14:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:55.515 14:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:55.515 14:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.515 14:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:55.515 14:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.776 14:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:55.776 14:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:55.776 14:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.776 14:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:55.776 14:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.345 14:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:56.345 14:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:56.345 14:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.345 14:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:56.345 14:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.605 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:56.605 14:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:56.605 14:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.605 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:56.605 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.866 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:56.866 14:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:56.866 14:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.866 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:56.866 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:57.128 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 434017 00:14:57.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (434017) - No such process 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- 
# wait 434017 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:57.128 rmmod nvme_tcp 00:14:57.128 rmmod nvme_fabrics 00:14:57.128 rmmod nvme_keyring 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 433670 ']' 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 433670 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 433670 ']' 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 433670 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:14:57.128 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:57.389 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 433670 00:14:57.389 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:14:57.389 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:14:57.389 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 433670' 00:14:57.389 killing process with pid 433670 00:14:57.389 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 433670 00:14:57.389 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 433670 00:14:57.389 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:57.389 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:57.389 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:57.389 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.389 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:57.389 14:17:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.389 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.389 14:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.934 14:17:22 nvmf_tcp.nvmf_connect_stress -- 
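The trace above is the tail of the connect_stress run: the script keeps checking that the backgrounded stress tool (PID 434017 here) is still alive with kill -0 while issuing RPCs at the target, and once kill -0 reports "No such process" it reaps the job, removes its rpc.txt scratch file, and tears the target down with nvmftestfini (unloading nvme-tcp/nvme-fabrics and killing the nvmf_tgt process, PID 433670). A minimal sketch of that polling pattern, assuming the harness-provided helpers rpc_cmd and nvmftestfini; only the kill -0 / rpc_cmd / wait / rm pattern and the PID are taken from the log, the loop shape itself is an assumption:

# Sketch of the liveness-polling loop visible in the trace above.
STRESS_PID=434017                              # backgrounded stress tool (value from the log)
while kill -0 "$STRESS_PID" 2>/dev/null; do    # succeeds only while the process still exists
    rpc_cmd                                    # harness helper: drive RPCs at the target meanwhile
done
wait "$STRESS_PID" 2>/dev/null || true         # reap the finished background job
rm -f rpc.txt                                  # drop the RPC scratch file, as in the trace
trap - SIGINT SIGTERM EXIT
nvmftestfini                                   # rmmod nvme-tcp/nvme-fabrics and kill nvmf_tgt (PID 433670 above)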
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:59.934 00:14:59.934 real 0m21.705s 00:14:59.934 user 0m42.334s 00:14:59.934 sys 0m9.360s 00:14:59.934 14:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:59.934 14:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:59.934 ************************************ 00:14:59.934 END TEST nvmf_connect_stress 00:14:59.934 ************************************ 00:14:59.934 14:17:23 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:59.934 14:17:23 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:59.934 14:17:23 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:59.934 14:17:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:59.934 ************************************ 00:14:59.934 START TEST nvmf_fused_ordering 00:14:59.934 ************************************ 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:59.934 * Looking for test storage... 00:14:59.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:59.934 
14:17:23 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.934 14:17:23 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.935 
14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:59.935 14:17:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.073 14:17:30 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:08.073 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:08.073 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:08.073 Found net devices under 0000:31:00.0: cvl_0_0 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:08.073 Found net devices under 0000:31:00.1: cvl_0_1 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:08.073 14:17:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:08.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:15:08.073 00:15:08.073 --- 10.0.0.2 ping statistics --- 00:15:08.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.073 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:08.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:08.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:15:08.073 00:15:08.073 --- 10.0.0.1 ping statistics --- 00:15:08.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.073 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:08.073 14:17:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:08.074 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=440686 00:15:08.074 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 440686 00:15:08.074 14:17:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:08.074 14:17:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 440686 ']' 00:15:08.074 14:17:31 nvmf_tcp.nvmf_fused_ordering -- 
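After discovering the two E810 ports (cvl_0_0 and cvl_0_1), the nvmf_tcp_init steps above split them between network namespaces so a single host can act as both initiator and target: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator with 10.0.0.1, port 4420 is opened in iptables, and a ping in each direction confirms the link before the target app starts. Condensed into a plain shell sketch, using exactly the interface names and addresses from the trace:

# Condensed from the nvmf_tcp_init commands in the trace above.
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1                 # start from clean interfaces
ip netns add cvl_0_0_ns_spdk                                         # namespace that hosts the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port goes into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (default ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator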
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.074 14:17:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:08.074 14:17:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.074 14:17:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:08.074 14:17:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:08.074 [2024-06-07 14:17:31.317465] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:15:08.074 [2024-06-07 14:17:31.317525] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.074 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.074 [2024-06-07 14:17:31.415719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.074 [2024-06-07 14:17:31.461463] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.074 [2024-06-07 14:17:31.461526] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.074 [2024-06-07 14:17:31.461535] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.074 [2024-06-07 14:17:31.461542] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.074 [2024-06-07 14:17:31.461548] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:08.074 [2024-06-07 14:17:31.461575] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:08.644 [2024-06-07 14:17:32.152170] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:08.644 [2024-06-07 14:17:32.176418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:08.644 NULL1 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:08.644 14:17:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:08.645 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:08.645 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:08.645 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:08.645 14:17:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:08.645 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:08.645 14:17:32 
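With connectivity verified, the harness starts nvmf_tgt inside the target namespace (PID 440686, core mask 0x2), waits for its RPC socket, and then issues the RPCs traced above: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a null bdev as namespace 1, and a listener on 10.0.0.2:4420, which the fused_ordering tool launched just below then exercises. A condensed sketch of that sequence, using SPDK's stock rpc.py client in place of the harness's rpc_cmd wrapper and paths relative to the SPDK tree (both assumptions; the parameters are the ones visible in the trace):

# Start the target in the namespace and configure it over /var/tmp/spdk.sock.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# ...wait for /var/tmp/spdk.sock to appear before issuing RPCs...
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'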
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:08.645 14:17:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:08.645 14:17:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:08.645 [2024-06-07 14:17:32.242830] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:15:08.645 [2024-06-07 14:17:32.242879] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440751 ] 00:15:08.645 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.215 Attached to nqn.2016-06.io.spdk:cnode1 00:15:09.215 Namespace ID: 1 size: 1GB 00:15:09.215 fused_ordering(0) 00:15:09.215 fused_ordering(1) 00:15:09.215 fused_ordering(2) 00:15:09.215 fused_ordering(3) 00:15:09.215 fused_ordering(4) 00:15:09.215 fused_ordering(5) 00:15:09.215 fused_ordering(6) 00:15:09.215 fused_ordering(7) 00:15:09.215 fused_ordering(8) 00:15:09.215 fused_ordering(9) 00:15:09.215 fused_ordering(10) 00:15:09.215 fused_ordering(11) 00:15:09.215 fused_ordering(12) 00:15:09.215 fused_ordering(13) 00:15:09.215 fused_ordering(14) 00:15:09.215 fused_ordering(15) 00:15:09.215 fused_ordering(16) 00:15:09.215 fused_ordering(17) 00:15:09.215 fused_ordering(18) 00:15:09.215 fused_ordering(19) 00:15:09.215 fused_ordering(20) 00:15:09.215 fused_ordering(21) 00:15:09.215 fused_ordering(22) 00:15:09.215 fused_ordering(23) 00:15:09.215 fused_ordering(24) 00:15:09.215 fused_ordering(25) 00:15:09.215 fused_ordering(26) 00:15:09.215 fused_ordering(27) 00:15:09.215 fused_ordering(28) 00:15:09.215 fused_ordering(29) 00:15:09.215 fused_ordering(30) 00:15:09.215 fused_ordering(31) 00:15:09.215 fused_ordering(32) 00:15:09.215 fused_ordering(33) 00:15:09.215 fused_ordering(34) 00:15:09.215 fused_ordering(35) 00:15:09.215 fused_ordering(36) 00:15:09.215 fused_ordering(37) 00:15:09.215 fused_ordering(38) 00:15:09.215 fused_ordering(39) 00:15:09.215 fused_ordering(40) 00:15:09.215 fused_ordering(41) 00:15:09.215 fused_ordering(42) 00:15:09.215 fused_ordering(43) 00:15:09.215 fused_ordering(44) 00:15:09.215 fused_ordering(45) 00:15:09.215 fused_ordering(46) 00:15:09.215 fused_ordering(47) 00:15:09.215 fused_ordering(48) 00:15:09.215 fused_ordering(49) 00:15:09.215 fused_ordering(50) 00:15:09.215 fused_ordering(51) 00:15:09.215 fused_ordering(52) 00:15:09.215 fused_ordering(53) 00:15:09.215 fused_ordering(54) 00:15:09.215 fused_ordering(55) 00:15:09.215 fused_ordering(56) 00:15:09.215 fused_ordering(57) 00:15:09.215 fused_ordering(58) 00:15:09.215 fused_ordering(59) 00:15:09.215 fused_ordering(60) 00:15:09.215 fused_ordering(61) 00:15:09.215 fused_ordering(62) 00:15:09.215 fused_ordering(63) 00:15:09.215 fused_ordering(64) 00:15:09.215 fused_ordering(65) 00:15:09.215 fused_ordering(66) 00:15:09.215 fused_ordering(67) 00:15:09.215 fused_ordering(68) 00:15:09.215 fused_ordering(69) 00:15:09.215 fused_ordering(70) 00:15:09.215 fused_ordering(71) 00:15:09.215 fused_ordering(72) 00:15:09.215 fused_ordering(73) 00:15:09.215 fused_ordering(74) 00:15:09.215 fused_ordering(75) 00:15:09.215 fused_ordering(76) 00:15:09.215 fused_ordering(77) 00:15:09.215 fused_ordering(78) 00:15:09.215 fused_ordering(79) 
00:15:09.215 fused_ordering(80) 00:15:09.215 fused_ordering(81) 00:15:09.215 fused_ordering(82) 00:15:09.215 fused_ordering(83) 00:15:09.215 fused_ordering(84) 00:15:09.215 fused_ordering(85) 00:15:09.215 fused_ordering(86) 00:15:09.215 fused_ordering(87) 00:15:09.215 fused_ordering(88) 00:15:09.215 fused_ordering(89) 00:15:09.215 fused_ordering(90) 00:15:09.215 fused_ordering(91) 00:15:09.215 fused_ordering(92) 00:15:09.215 fused_ordering(93) 00:15:09.215 fused_ordering(94) 00:15:09.215 fused_ordering(95) 00:15:09.215 fused_ordering(96) 00:15:09.215 fused_ordering(97) 00:15:09.215 fused_ordering(98) 00:15:09.215 fused_ordering(99) 00:15:09.215 fused_ordering(100) 00:15:09.215 fused_ordering(101) 00:15:09.215 fused_ordering(102) 00:15:09.215 fused_ordering(103) 00:15:09.215 fused_ordering(104) 00:15:09.215 fused_ordering(105) 00:15:09.215 fused_ordering(106) 00:15:09.215 fused_ordering(107) 00:15:09.215 fused_ordering(108) 00:15:09.215 fused_ordering(109) 00:15:09.215 fused_ordering(110) 00:15:09.215 fused_ordering(111) 00:15:09.215 fused_ordering(112) 00:15:09.215 fused_ordering(113) 00:15:09.215 fused_ordering(114) 00:15:09.215 fused_ordering(115) 00:15:09.215 fused_ordering(116) 00:15:09.215 fused_ordering(117) 00:15:09.215 fused_ordering(118) 00:15:09.215 fused_ordering(119) 00:15:09.215 fused_ordering(120) 00:15:09.215 fused_ordering(121) 00:15:09.215 fused_ordering(122) 00:15:09.215 fused_ordering(123) 00:15:09.215 fused_ordering(124) 00:15:09.215 fused_ordering(125) 00:15:09.215 fused_ordering(126) 00:15:09.215 fused_ordering(127) 00:15:09.215 fused_ordering(128) 00:15:09.215 fused_ordering(129) 00:15:09.215 fused_ordering(130) 00:15:09.215 fused_ordering(131) 00:15:09.215 fused_ordering(132) 00:15:09.215 fused_ordering(133) 00:15:09.215 fused_ordering(134) 00:15:09.215 fused_ordering(135) 00:15:09.215 fused_ordering(136) 00:15:09.215 fused_ordering(137) 00:15:09.215 fused_ordering(138) 00:15:09.215 fused_ordering(139) 00:15:09.215 fused_ordering(140) 00:15:09.215 fused_ordering(141) 00:15:09.215 fused_ordering(142) 00:15:09.215 fused_ordering(143) 00:15:09.215 fused_ordering(144) 00:15:09.215 fused_ordering(145) 00:15:09.215 fused_ordering(146) 00:15:09.215 fused_ordering(147) 00:15:09.215 fused_ordering(148) 00:15:09.215 fused_ordering(149) 00:15:09.215 fused_ordering(150) 00:15:09.215 fused_ordering(151) 00:15:09.215 fused_ordering(152) 00:15:09.215 fused_ordering(153) 00:15:09.215 fused_ordering(154) 00:15:09.215 fused_ordering(155) 00:15:09.215 fused_ordering(156) 00:15:09.215 fused_ordering(157) 00:15:09.215 fused_ordering(158) 00:15:09.216 fused_ordering(159) 00:15:09.216 fused_ordering(160) 00:15:09.216 fused_ordering(161) 00:15:09.216 fused_ordering(162) 00:15:09.216 fused_ordering(163) 00:15:09.216 fused_ordering(164) 00:15:09.216 fused_ordering(165) 00:15:09.216 fused_ordering(166) 00:15:09.216 fused_ordering(167) 00:15:09.216 fused_ordering(168) 00:15:09.216 fused_ordering(169) 00:15:09.216 fused_ordering(170) 00:15:09.216 fused_ordering(171) 00:15:09.216 fused_ordering(172) 00:15:09.216 fused_ordering(173) 00:15:09.216 fused_ordering(174) 00:15:09.216 fused_ordering(175) 00:15:09.216 fused_ordering(176) 00:15:09.216 fused_ordering(177) 00:15:09.216 fused_ordering(178) 00:15:09.216 fused_ordering(179) 00:15:09.216 fused_ordering(180) 00:15:09.216 fused_ordering(181) 00:15:09.216 fused_ordering(182) 00:15:09.216 fused_ordering(183) 00:15:09.216 fused_ordering(184) 00:15:09.216 fused_ordering(185) 00:15:09.216 fused_ordering(186) 00:15:09.216 fused_ordering(187) 
00:15:09.216 fused_ordering(188) ... 00:15:10.879 fused_ordering(1023) [repetitive per-iteration log lines condensed: fused_ordering iterations 188 through 1023 all completed between 00:15:09.216 and 00:15:10.879]
00:15:10.879 14:17:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:10.879 14:17:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:10.879 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:10.879 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:10.879 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:10.879 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:10.879 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:10.879 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:10.879 rmmod nvme_tcp 00:15:10.879 rmmod nvme_fabrics 00:15:10.879 rmmod nvme_keyring 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 440686 ']' 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 440686 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 440686 ']' 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 440686 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 440686 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 440686' 00:15:11.139 killing process with pid 440686 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 440686 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 440686 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.139 14:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.684 14:17:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:13.684 00:15:13.684 real 0m13.743s 00:15:13.684 user 0m7.120s 00:15:13.684 sys 0m7.237s 00:15:13.684 14:17:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:13.684 14:17:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:13.684 ************************************ 00:15:13.684 END TEST nvmf_fused_ordering 00:15:13.684 ************************************ 00:15:13.684 14:17:36 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:13.684 14:17:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:13.684 14:17:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:13.684 14:17:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:13.684 
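The nvmftestfini teardown recorded just above reduces, in order, to the following steps (a minimal sketch in the test suite's own shell idiom; the pid 440686 and the cvl_0_* interface names are specific to this run and will differ elsewhere):

    modprobe -v -r nvme-tcp        # unloads nvme_tcp, nvme_fabrics and nvme_keyring, as logged above
    modprobe -v -r nvme-fabrics    # second pass in case the first removal left nvme-fabrics loaded
    kill 440686                    # stop the nvmf_tgt reactor process started for the fused_ordering test
    _remove_spdk_ns                # helper that tears down the cvl_0_0_ns_spdk network namespace
    ip -4 addr flush cvl_0_1       # drop the initiator-side test address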
************************************ 00:15:13.684 START TEST nvmf_delete_subsystem 00:15:13.684 ************************************ 00:15:13.684 14:17:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:13.684 * Looking for test storage... 00:15:13.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:13.684 14:17:37 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:13.684 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.685 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.685 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.685 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:13.685 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:13.685 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:15:13.685 14:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:21.817 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:21.818 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:21.818 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:21.818 
14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:21.818 Found net devices under 0000:31:00.0: cvl_0_0 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:21.818 Found net devices under 0000:31:00.1: cvl_0_1 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:21.818 14:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:21.818 14:17:44 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:21.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:21.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:15:21.818 00:15:21.818 --- 10.0.0.2 ping statistics --- 00:15:21.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.818 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:21.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:21.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:15:21.818 00:15:21.818 --- 10.0.0.1 ping statistics --- 00:15:21.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.818 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=445996 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 445996 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 445996 ']' 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem 
-- common/autotest_common.sh@835 -- # local max_retries=100 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:21.818 14:17:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:21.818 [2024-06-07 14:17:45.233917] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:15:21.818 [2024-06-07 14:17:45.233982] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.818 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.818 [2024-06-07 14:17:45.315276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:21.818 [2024-06-07 14:17:45.354333] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.818 [2024-06-07 14:17:45.354377] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.818 [2024-06-07 14:17:45.354386] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.818 [2024-06-07 14:17:45.354392] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.818 [2024-06-07 14:17:45.354398] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:21.818 [2024-06-07 14:17:45.358216] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.818 [2024-06-07 14:17:45.358238] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.408 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:22.408 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:15:22.408 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:22.408 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:22.408 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:22.408 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.408 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:22.408 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.408 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:22.685 [2024-06-07 14:17:46.049812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:22.685 [2024-06-07 14:17:46.073986] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:22.685 NULL1 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:22.685 Delay0 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=446124 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:22.685 14:17:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:22.685 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.685 [2024-06-07 14:17:46.160671] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
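For reference, the target-side configuration that the transcript drives through rpc_cmd boils down to the sequence below. This is a sketch, not the script itself: rpc_cmd is assumed to be the suite's wrapper around scripts/rpc.py talking to the /var/tmp/spdk.sock socket shown earlier, paths are abbreviated, and the flag values are copied verbatim from this run:

    # transport and subsystem, exactly as recorded above
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # a null bdev wrapped in a delay bdev keeps I/O in flight long enough for the delete to race it
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # queued random I/O from the initiator, then the subsystem is deleted underneath it after the sleep
    build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The error completions that follow, and the submissions failing with -6, are the expected fallout of tearing the subsystem down while spdk_nvme_perf still has commands outstanding on its queues.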
00:15:24.594 14:17:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.594 14:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.594 14:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 starting I/O failed: -6 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 starting I/O failed: -6 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 starting I/O failed: -6 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 starting I/O failed: -6 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 starting I/O failed: -6 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 starting I/O failed: -6 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 starting I/O failed: -6 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 starting I/O failed: -6 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 starting I/O failed: -6 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 starting I/O failed: -6 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 starting I/O failed: -6 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 starting I/O failed: -6 00:15:24.854 [2024-06-07 14:17:48.243679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69a70 is same with the state(5) to be set 00:15:24.854 Write completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Read completed with error (sct=0, sc=8) 00:15:24.854 Write completed 
with error (sct=0, sc=8) 00:15:24.854 [... repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...] 00:15:24.855 [2024-06-07 14:17:48.249222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f125c000c00 is same with the state(5) to be set 
00:15:25.794 [2024-06-07 14:17:49.217407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b67c80 is same with the state(5) to be set 00:15:25.794 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...] 
00:15:25.794 [2024-06-07 14:17:49.246841] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b69eb0 is same with the state(5) to be set 00:15:25.794 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...] 
00:15:25.794 [2024-06-07 14:17:49.247161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6a4d0 is same with the state(5) to be set 00:15:25.794 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...] 
00:15:25.794 [2024-06-07 14:17:49.250991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f125c00c780 is same with the state(5) to be set 00:15:25.794 [... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...] 
00:15:25.794 [2024-06-07 14:17:49.251113] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f125c00bfe0 is same with the state(5) to be set 
00:15:25.794 Initializing NVMe Controllers 00:15:25.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:25.795 Controller IO queue size 128, less than required. 00:15:25.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:15:25.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:25.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:25.795 Initialization complete. Launching workers. 00:15:25.795 ======================================================== 00:15:25.795 Latency(us) 00:15:25.795 Device Information : IOPS MiB/s Average min max 00:15:25.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.91 0.08 894212.36 235.48 1005465.08 00:15:25.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 178.88 0.09 951028.23 390.34 2002091.07 00:15:25.795 ======================================================== 00:15:25.795 Total : 348.80 0.17 923350.78 235.48 2002091.07 00:15:25.795 00:15:25.795 [2024-06-07 14:17:49.251750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b67c80 (9): Bad file descriptor 00:15:25.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:25.795 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.795 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:15:25.795 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 446124 00:15:25.795 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:26.365 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:26.365 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 446124 00:15:26.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (446124) - No such process 00:15:26.365 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 446124 00:15:26.365 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:15:26.365 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 446124 00:15:26.365 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:15:26.365 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:26.365 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:15:26.365 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:26.365 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 446124 00:15:26.365 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:15:26.365 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:26.366 14:17:49 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:26.366 [2024-06-07 14:17:49.782208] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=446795 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 446795 00:15:26.366 14:17:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:26.366 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.366 [2024-06-07 14:17:49.841873] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
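For reference, the flow delete_subsystem.sh is exercising above (recreate the subsystem, point spdk_nvme_perf at it, then poll until perf exits once the subsystem disappears underneath it) looks roughly like the sketch below. This is a simplified illustration assembled from the commands visible in the trace, not the script itself; the spdk_nvme_perf path is abbreviated, and the nvmf_delete_subsystem step is an assumption here, since the deletion itself is not shown in this part of the log.

# Simplified sketch of the delete-while-under-load pattern (values taken from the trace above).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Same perf invocation as in the trace, run in the background.
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Assumed step (not shown in this part of the log): remove the subsystem while I/O is in flight.
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Poll until perf exits, mirroring the delay / kill -0 / sleep 0.5 loop in the trace.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break    # give up after roughly 10 seconds
    sleep 0.5
done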
00:15:26.936 14:17:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:26.936 14:17:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 446795 00:15:26.936 14:17:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:27.195 14:17:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:27.195 14:17:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 446795 00:15:27.195 14:17:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:27.766 14:17:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:27.766 14:17:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 446795 00:15:27.766 14:17:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:28.336 14:17:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:28.336 14:17:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 446795 00:15:28.336 14:17:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:28.906 14:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:28.906 14:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 446795 00:15:28.906 14:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:29.476 14:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:29.476 14:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 446795 00:15:29.476 14:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:29.476 Initializing NVMe Controllers 00:15:29.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:29.476 Controller IO queue size 128, less than required. 00:15:29.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:29.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:29.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:29.476 Initialization complete. Launching workers. 
00:15:29.476 ======================================================== 00:15:29.476 Latency(us) 00:15:29.476 Device Information : IOPS MiB/s Average min max 00:15:29.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001980.23 1000195.49 1006207.45 00:15:29.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002811.52 1000149.62 1009066.89 00:15:29.476 ======================================================== 00:15:29.476 Total : 256.00 0.12 1002395.88 1000149.62 1009066.89 00:15:29.476 00:15:29.736 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:29.736 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 446795 00:15:29.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (446795) - No such process 00:15:29.736 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 446795 00:15:29.736 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:29.736 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:29.736 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.736 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:15:29.736 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.736 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:15:29.736 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.736 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.736 rmmod nvme_tcp 00:15:29.736 rmmod nvme_fabrics 00:15:29.736 rmmod nvme_keyring 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 445996 ']' 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 445996 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 445996 ']' 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 445996 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 445996 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 445996' 00:15:29.996 killing process with pid 445996 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 445996 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 445996 
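Both perf runs above report "Controller IO queue size 128, less than required": the controller's I/O queue is smaller than what the requested queue depth needs, so some requests sit queued inside the NVMe driver, which is exactly what the printed advisory warns about. A hypothetical re-run that follows the advisory, using the same flags as the trace but a lower queue depth, could look like this; the -q 32 value is only an illustration, not something the test uses.

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 32 -w randrw -M 70 -o 512 -P 4    # lower -q (and/or smaller -o) per the advisory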
00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.996 14:17:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.539 14:17:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:32.539 00:15:32.539 real 0m18.743s 00:15:32.539 user 0m30.802s 00:15:32.539 sys 0m6.853s 00:15:32.539 14:17:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:32.539 14:17:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:32.539 ************************************ 00:15:32.539 END TEST nvmf_delete_subsystem 00:15:32.539 ************************************ 00:15:32.539 14:17:55 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:32.539 14:17:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:32.539 14:17:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:32.539 14:17:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:32.539 ************************************ 00:15:32.539 START TEST nvmf_ns_masking 00:15:32.539 ************************************ 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:32.539 * Looking for test storage... 
00:15:32.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:32.539 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=cc153792-6e70-4d44-a2fa-18337f31c1b5 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:32.540 14:17:55 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:32.540 14:17:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:40.676 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:40.677 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:40.677 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:40.677 Found net devices under 0000:31:00.0: cvl_0_0 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:40.677 Found net devices under 0000:31:00.1: cvl_0_1 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:40.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:15:40.677 00:15:40.677 --- 10.0.0.2 ping statistics --- 00:15:40.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.677 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:40.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:15:40.677 00:15:40.677 --- 10.0.0.1 ping statistics --- 00:15:40.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.677 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=452281 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 452281 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 452281 ']' 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.677 14:18:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:40.678 14:18:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.678 14:18:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:40.678 14:18:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:40.678 14:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:40.678 [2024-06-07 14:18:03.982668] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
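The target/initiator split above is built with a network namespace rather than two hosts: one port of the NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target, while the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched inside that namespace. A condensed recap of the commands from the trace (the interface names are simply the ones detected on this machine):

# Recap of the namespace plumbing traced above (nvmf/common.sh); not a standalone script.
ip netns add cvl_0_0_ns_spdk                                         # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator check
modprobe nvme-tcp                                                    # initiator-side NVMe/TCP driver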
00:15:40.678 [2024-06-07 14:18:03.982734] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.678 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.678 [2024-06-07 14:18:04.064169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:40.678 [2024-06-07 14:18:04.105950] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.678 [2024-06-07 14:18:04.105996] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.678 [2024-06-07 14:18:04.106004] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.678 [2024-06-07 14:18:04.106011] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.678 [2024-06-07 14:18:04.106016] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.678 [2024-06-07 14:18:04.106157] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.678 [2024-06-07 14:18:04.106289] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.678 [2024-06-07 14:18:04.106606] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.678 [2024-06-07 14:18:04.106607] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.248 14:18:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:41.248 14:18:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:15:41.248 14:18:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:41.248 14:18:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:41.248 14:18:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:41.248 14:18:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.248 14:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:41.509 [2024-06-07 14:18:04.937285] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.509 14:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:41.509 14:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:41.509 14:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:41.509 Malloc1 00:15:41.509 14:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:41.769 Malloc2 00:15:41.769 14:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:42.028 14:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:42.028 14:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.289 [2024-06-07 14:18:05.775590] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.289 14:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:15:42.289 14:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cc153792-6e70-4d44-a2fa-18337f31c1b5 -a 10.0.0.2 -s 4420 -i 4 00:15:42.549 14:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:42.549 14:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:15:42.549 14:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:42.549 14:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:15:42.549 14:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:15:44.495 14:18:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:44.495 14:18:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:44.495 14:18:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:44.495 14:18:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:44.495 14:18:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:44.495 14:18:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:15:44.495 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:44.495 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:44.495 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:44.495 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:44.495 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:44.495 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:44.495 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:44.495 [ 0]:0x1 00:15:44.495 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.495 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:44.755 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d296046b57514788921de18bd25c42e0 00:15:44.755 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d296046b57514788921de18bd25c42e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.755 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:44.755 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:44.755 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:44.755 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
00:15:44.755 [ 0]:0x1 00:15:44.755 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.755 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:44.755 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d296046b57514788921de18bd25c42e0 00:15:44.755 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d296046b57514788921de18bd25c42e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.755 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:44.755 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:44.755 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:44.755 [ 1]:0x2 00:15:44.755 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.755 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:45.016 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31a9082b9afd49cbb7054cb821208042 00:15:45.016 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31a9082b9afd49cbb7054cb821208042 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:45.016 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:15:45.016 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:45.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.016 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.276 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:45.276 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:15:45.276 14:18:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cc153792-6e70-4d44-a2fa-18337f31c1b5 -a 10.0.0.2 -s 4420 -i 4 00:15:45.536 14:18:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:45.536 14:18:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:15:45.536 14:18:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:45.536 14:18:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:15:45.536 14:18:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:15:45.536 14:18:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:15:47.448 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:47.448 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:47.448 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:47.448 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:15:47.448 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == 
nvme_device_counter )) 00:15:47.448 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:15:47.448 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:47.448 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:47.709 [ 0]:0x2 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31a9082b9afd49cbb7054cb821208042 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31a9082b9afd49cbb7054cb821208042 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.709 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:15:47.970 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:47.970 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:47.970 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:47.970 [ 0]:0x1 00:15:47.970 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.970 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:47.970 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d296046b57514788921de18bd25c42e0 00:15:47.970 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d296046b57514788921de18bd25c42e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.970 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:47.970 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:47.970 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:47.970 [ 1]:0x2 00:15:47.970 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:47.970 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:47.970 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31a9082b9afd49cbb7054cb821208042 00:15:47.970 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31a9082b9afd49cbb7054cb821208042 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.970 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:48.231 
14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:48.231 [ 0]:0x2 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31a9082b9afd49cbb7054cb821208042 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31a9082b9afd49cbb7054cb821208042 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:48.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.231 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:48.492 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:15:48.492 14:18:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cc153792-6e70-4d44-a2fa-18337f31c1b5 -a 10.0.0.2 -s 4420 -i 4 00:15:48.492 14:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:48.492 14:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:15:48.492 14:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:15:48.492 14:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:15:48.492 14:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:15:48.492 14:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:51.035 [ 0]:0x1 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=d296046b57514788921de18bd25c42e0 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ d296046b57514788921de18bd25c42e0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:51.035 [ 1]:0x2 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31a9082b9afd49cbb7054cb821208042 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31a9082b9afd49cbb7054cb821208042 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:51.035 [ 0]:0x2 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31a9082b9afd49cbb7054cb821208042 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31a9082b9afd49cbb7054cb821208042 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:51.035 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:51.036 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:51.036 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:51.036 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:51.036 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:51.036 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:51.036 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:51.036 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:51.036 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:51.036 [2024-06-07 14:18:14.663072] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:51.036 request: 00:15:51.036 { 00:15:51.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.036 "nsid": 2, 00:15:51.036 "host": "nqn.2016-06.io.spdk:host1", 00:15:51.036 "method": 
"nvmf_ns_remove_host", 00:15:51.036 "req_id": 1 00:15:51.036 } 00:15:51.036 Got JSON-RPC error response 00:15:51.036 response: 00:15:51.036 { 00:15:51.036 "code": -32602, 00:15:51.036 "message": "Invalid parameters" 00:15:51.036 } 00:15:51.295 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:51.295 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:51.295 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:51.295 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:51.295 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:51.295 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:51.295 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:51.295 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:51.295 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:51.295 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:51.295 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:51.295 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:51.295 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:51.295 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:51.296 [ 0]:0x2 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31a9082b9afd49cbb7054cb821208042 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31a9082b9afd49cbb7054cb821208042 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:15:51.296 14:18:14 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:51.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.557 14:18:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:51.557 14:18:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:51.557 14:18:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:51.557 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:51.557 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:51.557 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:51.557 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:51.557 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:51.557 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:51.557 rmmod nvme_tcp 00:15:51.557 rmmod nvme_fabrics 00:15:51.557 rmmod nvme_keyring 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 452281 ']' 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 452281 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 452281 ']' 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 452281 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 452281 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 452281' 00:15:51.817 killing process with pid 452281 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 452281 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 452281 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.817 14:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.359 14:18:17 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:54.359 00:15:54.359 real 0m21.759s 00:15:54.359 user 0m50.032s 00:15:54.359 sys 0m7.306s 00:15:54.359 14:18:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:54.359 14:18:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:54.359 ************************************ 00:15:54.359 END TEST nvmf_ns_masking 00:15:54.359 ************************************ 00:15:54.359 14:18:17 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:54.359 14:18:17 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:54.359 14:18:17 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:54.359 14:18:17 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:54.359 14:18:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:54.359 ************************************ 00:15:54.359 START TEST nvmf_nvme_cli 00:15:54.359 ************************************ 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:54.359 * Looking for test storage... 00:15:54.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.359 14:18:17 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:54.360 14:18:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:02.500 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:02.501 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:02.501 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:02.501 14:18:25 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:02.501 Found net devices under 0000:31:00.0: cvl_0_0 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:02.501 Found net devices under 0000:31:00.1: cvl_0_1 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:02.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:16:02.501 00:16:02.501 --- 10.0.0.2 ping statistics --- 00:16:02.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.501 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:02.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:02.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:16:02.501 00:16:02.501 --- 10.0.0.1 ping statistics --- 00:16:02.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.501 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=459880 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 459880 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 459880 ']' 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:02.501 14:18:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
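For reference, the bring-up that the waitforlisten step above is polling for can be reproduced by hand with a sequence like the one below. This is a minimal sketch, not the test's own helper: the nvmf_tgt flags, the cvl_0_0_ns_spdk namespace, and the rpc.py transport call are taken from this run's log, while the socket-polling loop is an assumed stand-in for the waitforlisten helper (which likewise waits on the default /var/tmp/spdk.sock RPC socket).

# start the target inside the test network namespace (paths assume this run's workspace layout)
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# assumed stand-in for waitforlisten: poll for the default RPC UNIX socket instead of sleeping blindly
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done
# once the socket is up, RPCs such as the transport creation seen below can be issued
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192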
00:16:02.502 14:18:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:02.502 14:18:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:02.502 [2024-06-07 14:18:25.896904] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:16:02.502 [2024-06-07 14:18:25.896966] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.502 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.502 [2024-06-07 14:18:25.975340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:02.502 [2024-06-07 14:18:26.015691] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.502 [2024-06-07 14:18:26.015733] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.502 [2024-06-07 14:18:26.015741] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.502 [2024-06-07 14:18:26.015748] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.502 [2024-06-07 14:18:26.015753] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.502 [2024-06-07 14:18:26.015895] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.502 [2024-06-07 14:18:26.016018] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.502 [2024-06-07 14:18:26.016177] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.502 [2024-06-07 14:18:26.016178] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.071 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:03.071 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:16:03.071 14:18:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:03.071 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:03.071 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.331 [2024-06-07 14:18:26.730912] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.331 Malloc0 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.331 Malloc1 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.331 [2024-06-07 14:18:26.820525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:03.331 14:18:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:16:03.331 00:16:03.331 Discovery Log Number of Records 2, Generation counter 2 00:16:03.331 =====Discovery Log Entry 0====== 00:16:03.331 trtype: tcp 00:16:03.331 adrfam: ipv4 00:16:03.331 subtype: current discovery subsystem 00:16:03.331 treq: not required 00:16:03.331 portid: 0 00:16:03.331 trsvcid: 4420 00:16:03.331 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:03.331 traddr: 10.0.0.2 00:16:03.332 eflags: explicit discovery connections, duplicate discovery information 00:16:03.332 sectype: none 00:16:03.332 =====Discovery Log Entry 1====== 00:16:03.332 trtype: tcp 00:16:03.332 adrfam: ipv4 00:16:03.332 subtype: nvme subsystem 00:16:03.332 treq: not required 00:16:03.332 portid: 0 00:16:03.332 trsvcid: 4420 
00:16:03.332 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:03.332 traddr: 10.0.0.2 00:16:03.332 eflags: none 00:16:03.332 sectype: none 00:16:03.332 14:18:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:03.332 14:18:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:03.332 14:18:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:03.332 14:18:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:03.332 14:18:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:03.332 14:18:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:03.332 14:18:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:03.332 14:18:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:03.332 14:18:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:03.332 14:18:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:03.332 14:18:26 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:05.242 14:18:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:05.242 14:18:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:16:05.242 14:18:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:05.242 14:18:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:16:05.242 14:18:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:16:05.242 14:18:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:07.189 14:18:30 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:07.189 /dev/nvme0n1 ]] 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:07.189 14:18:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:07.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.450 14:18:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:07.451 14:18:31 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:07.451 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:07.451 rmmod nvme_tcp 00:16:07.451 rmmod nvme_fabrics 00:16:07.451 rmmod nvme_keyring 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 459880 ']' 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 459880 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 459880 ']' 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 459880 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 459880 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 459880' 00:16:07.711 killing process with pid 459880 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 459880 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 459880 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.711 14:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.257 14:18:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:10.257 00:16:10.257 real 0m15.813s 00:16:10.257 user 0m23.401s 00:16:10.257 sys 0m6.622s 00:16:10.257 14:18:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:10.257 14:18:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.258 ************************************ 00:16:10.258 END TEST nvmf_nvme_cli 00:16:10.258 ************************************ 00:16:10.258 14:18:33 nvmf_tcp -- 
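Condensed out of the xtrace above, the host-side portion of the nvme_cli test is essentially the following sequence. Every command and value here (hostnqn/hostid, subsystem NQN, target address, serial number) appears verbatim in the log; only the retry loop is a simplification of the waitforserial helper, which caps the wait at 15 iterations.

# Attach the host to the TCP subsystem exported by the SPDK target
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
             --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# Wait until both namespaces are visible, matching them by the subsystem serial number
while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 2 )); do
    sleep 2
done

# Detach again; the target-side subsystem is then removed over RPC (nvmf_delete_subsystem)
nvme disconnect -n nqn.2016-06.io.spdk:cnode1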
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:16:10.258 14:18:33 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:10.258 14:18:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:10.258 14:18:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:10.258 14:18:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:10.258 ************************************ 00:16:10.258 START TEST nvmf_vfio_user 00:16:10.258 ************************************ 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:10.258 * Looking for test storage... 00:16:10.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:10.258 
14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=461386 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 461386' 00:16:10.258 Process pid: 461386 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 461386 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 461386 ']' 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:10.258 14:18:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:10.258 [2024-06-07 14:18:33.643473] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:16:10.258 [2024-06-07 14:18:33.643544] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.258 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.258 [2024-06-07 14:18:33.714734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:10.258 [2024-06-07 14:18:33.755080] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.258 [2024-06-07 14:18:33.755125] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.258 [2024-06-07 14:18:33.755133] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.258 [2024-06-07 14:18:33.755140] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.258 [2024-06-07 14:18:33.755145] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
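The vfio-user test brings up its own target process before issuing any RPCs. Stripped of the xtrace noise, the startup shown above amounts to the sketch below: the nvmf_tgt path, its flags and the /var/tmp/spdk.sock socket are taken from the log, while polling rpc_get_methods is only one way to express what the script's waitforlisten helper does.

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the NVMe-oF target: shared-memory id 0, all tracepoint groups (0xFFFF), cores 0-3
$spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!

# Block until the app answers on the default RPC socket /var/tmp/spdk.sock
until $spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do
    sleep 1
done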
00:16:10.258 [2024-06-07 14:18:33.755239] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.258 [2024-06-07 14:18:33.755468] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.258 [2024-06-07 14:18:33.755469] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:10.258 [2024-06-07 14:18:33.755303] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.830 14:18:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:10.830 14:18:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:16:10.830 14:18:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:12.216 14:18:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:12.216 14:18:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:12.216 14:18:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:12.216 14:18:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:12.216 14:18:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:12.216 14:18:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:12.216 Malloc1 00:16:12.216 14:18:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:12.476 14:18:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:12.736 14:18:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:12.736 14:18:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:12.736 14:18:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:12.736 14:18:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:12.996 Malloc2 00:16:12.997 14:18:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:13.257 14:18:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:13.257 14:18:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:13.519 14:18:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:13.519 14:18:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:13.519 14:18:36 
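All of the per-controller provisioning interleaved above can be read as one short RPC script. The sketch below collects those calls for the first controller; the second controller repeats the same steps with Malloc2, cnode2 and the vfio-user2 directory. rpc_py is the scripts/rpc.py path the test itself defines, and "64 512" are the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values from the top of nvmf_vfio_user.sh.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# One vfio-user transport for the whole target
$rpc_py nvmf_create_transport -t VFIOUSER

# Controller 1: backing bdev, subsystem, namespace and vfio-user listener
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
$rpc_py bdev_malloc_create 64 512 -b Malloc1        # 64 MB bdev, 512-byte blocks
$rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

Note that the listener address is a directory rather than an IP address: the vfio-user transport creates a control socket underneath it (seen later in the log as /var/run/vfio-user/domain/vfio-user1/1/cntrl), and that is what the identify and perf tools attach to.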
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:13.519 14:18:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:13.519 14:18:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:13.519 14:18:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:13.519 [2024-06-07 14:18:36.991510] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:16:13.519 [2024-06-07 14:18:36.991554] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid462076 ] 00:16:13.519 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.519 [2024-06-07 14:18:37.022804] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:13.519 [2024-06-07 14:18:37.027489] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:13.519 [2024-06-07 14:18:37.027509] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f64a2242000 00:16:13.519 [2024-06-07 14:18:37.028493] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:13.519 [2024-06-07 14:18:37.029494] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:13.519 [2024-06-07 14:18:37.030499] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:13.519 [2024-06-07 14:18:37.031507] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:13.519 [2024-06-07 14:18:37.032517] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:13.519 [2024-06-07 14:18:37.033514] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:13.519 [2024-06-07 14:18:37.034525] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:13.519 [2024-06-07 14:18:37.035523] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:13.519 [2024-06-07 14:18:37.036536] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:13.519 [2024-06-07 14:18:37.036546] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f64a1009000 00:16:13.519 [2024-06-07 14:18:37.037873] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:13.519 [2024-06-07 14:18:37.058355] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: 
Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:13.519 [2024-06-07 14:18:37.058380] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:13.519 [2024-06-07 14:18:37.060665] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:13.519 [2024-06-07 14:18:37.060712] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:13.519 [2024-06-07 14:18:37.060794] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:13.519 [2024-06-07 14:18:37.060811] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:13.519 [2024-06-07 14:18:37.060816] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:13.519 [2024-06-07 14:18:37.065200] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:13.519 [2024-06-07 14:18:37.065210] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:13.519 [2024-06-07 14:18:37.065218] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:13.520 [2024-06-07 14:18:37.065688] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:13.520 [2024-06-07 14:18:37.065697] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:13.520 [2024-06-07 14:18:37.065704] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:13.520 [2024-06-07 14:18:37.066693] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:13.520 [2024-06-07 14:18:37.066701] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:13.520 [2024-06-07 14:18:37.067693] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:13.520 [2024-06-07 14:18:37.067701] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:13.520 [2024-06-07 14:18:37.067706] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:13.520 [2024-06-07 14:18:37.067712] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:13.520 [2024-06-07 14:18:37.067818] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:13.520 [2024-06-07 14:18:37.067826] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:13.520 [2024-06-07 14:18:37.067831] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:13.520 [2024-06-07 14:18:37.068704] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:13.520 [2024-06-07 14:18:37.069709] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:13.520 [2024-06-07 14:18:37.070719] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:13.520 [2024-06-07 14:18:37.071709] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:13.520 [2024-06-07 14:18:37.071763] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:13.520 [2024-06-07 14:18:37.072722] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:13.520 [2024-06-07 14:18:37.072729] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:13.520 [2024-06-07 14:18:37.072734] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.072755] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:13.520 [2024-06-07 14:18:37.072762] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.072779] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:13.520 [2024-06-07 14:18:37.072784] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:13.520 [2024-06-07 14:18:37.072797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:13.520 [2024-06-07 14:18:37.072828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:13.520 [2024-06-07 14:18:37.072836] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:13.520 [2024-06-07 14:18:37.072841] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:13.520 [2024-06-07 14:18:37.072845] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:13.520 [2024-06-07 14:18:37.072852] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:13.520 [2024-06-07 14:18:37.072857] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 
1 00:16:13.520 [2024-06-07 14:18:37.072861] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:13.520 [2024-06-07 14:18:37.072866] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.072873] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.072883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:13.520 [2024-06-07 14:18:37.072896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:13.520 [2024-06-07 14:18:37.072907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.520 [2024-06-07 14:18:37.072915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.520 [2024-06-07 14:18:37.072923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.520 [2024-06-07 14:18:37.072932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:13.520 [2024-06-07 14:18:37.072937] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.072946] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.072955] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:13.520 [2024-06-07 14:18:37.072966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:13.520 [2024-06-07 14:18:37.072971] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:13.520 [2024-06-07 14:18:37.072976] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.072983] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.072989] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.072998] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:13.520 [2024-06-07 14:18:37.073005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:13.520 [2024-06-07 14:18:37.073054] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.073062] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.073069] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:13.520 [2024-06-07 14:18:37.073074] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:13.520 [2024-06-07 14:18:37.073080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:13.520 [2024-06-07 14:18:37.073089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:13.520 [2024-06-07 14:18:37.073097] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:13.520 [2024-06-07 14:18:37.073105] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.073112] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.073119] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:13.520 [2024-06-07 14:18:37.073125] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:13.520 [2024-06-07 14:18:37.073132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:13.520 [2024-06-07 14:18:37.073150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:13.520 [2024-06-07 14:18:37.073161] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.073169] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.073175] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:13.520 [2024-06-07 14:18:37.073180] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:13.520 [2024-06-07 14:18:37.073186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:13.520 [2024-06-07 14:18:37.073199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:13.520 [2024-06-07 14:18:37.073206] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.073213] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.073220] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.073226] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.073231] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.073236] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:13.520 [2024-06-07 14:18:37.073240] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:13.520 [2024-06-07 14:18:37.073245] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:13.520 [2024-06-07 14:18:37.073264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:13.520 [2024-06-07 14:18:37.073274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:13.521 [2024-06-07 14:18:37.073286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:13.521 [2024-06-07 14:18:37.073293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:13.521 [2024-06-07 14:18:37.073303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:13.521 [2024-06-07 14:18:37.073314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:13.521 [2024-06-07 14:18:37.073326] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:13.521 [2024-06-07 14:18:37.073333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:13.521 [2024-06-07 14:18:37.073343] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:13.521 [2024-06-07 14:18:37.073349] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:13.521 [2024-06-07 14:18:37.073352] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:13.521 [2024-06-07 14:18:37.073356] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:13.521 [2024-06-07 14:18:37.073362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:13.521 [2024-06-07 14:18:37.073370] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:13.521 [2024-06-07 14:18:37.073374] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:13.521 [2024-06-07 14:18:37.073380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:13.521 [2024-06-07 14:18:37.073387] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:13.521 [2024-06-07 14:18:37.073391] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:13.521 [2024-06-07 14:18:37.073397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:13.521 [2024-06-07 14:18:37.073404] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:13.521 [2024-06-07 14:18:37.073408] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:13.521 [2024-06-07 14:18:37.073414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:13.521 [2024-06-07 14:18:37.073421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:13.521 [2024-06-07 14:18:37.073433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:13.521 [2024-06-07 14:18:37.073442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:13.521 [2024-06-07 14:18:37.073453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:13.521 ===================================================== 00:16:13.521 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:13.521 ===================================================== 00:16:13.521 Controller Capabilities/Features 00:16:13.521 ================================ 00:16:13.521 Vendor ID: 4e58 00:16:13.521 Subsystem Vendor ID: 4e58 00:16:13.521 Serial Number: SPDK1 00:16:13.521 Model Number: SPDK bdev Controller 00:16:13.521 Firmware Version: 24.09 00:16:13.521 Recommended Arb Burst: 6 00:16:13.521 IEEE OUI Identifier: 8d 6b 50 00:16:13.521 Multi-path I/O 00:16:13.521 May have multiple subsystem ports: Yes 00:16:13.521 May have multiple controllers: Yes 00:16:13.521 Associated with SR-IOV VF: No 00:16:13.521 Max Data Transfer Size: 131072 00:16:13.521 Max Number of Namespaces: 32 00:16:13.521 Max Number of I/O Queues: 127 00:16:13.521 NVMe Specification Version (VS): 1.3 00:16:13.521 NVMe Specification Version (Identify): 1.3 00:16:13.521 Maximum Queue Entries: 256 00:16:13.521 Contiguous Queues Required: Yes 00:16:13.521 Arbitration Mechanisms Supported 00:16:13.521 Weighted Round Robin: Not Supported 00:16:13.521 Vendor Specific: Not Supported 00:16:13.521 Reset Timeout: 15000 ms 00:16:13.521 Doorbell Stride: 4 bytes 00:16:13.521 NVM Subsystem Reset: Not Supported 00:16:13.521 Command Sets Supported 00:16:13.521 NVM Command Set: Supported 00:16:13.521 Boot Partition: Not Supported 00:16:13.521 Memory Page Size Minimum: 4096 bytes 00:16:13.521 Memory Page Size Maximum: 4096 bytes 00:16:13.521 Persistent Memory Region: Not Supported 00:16:13.521 Optional Asynchronous Events Supported 00:16:13.521 Namespace Attribute Notices: Supported 00:16:13.521 Firmware Activation Notices: Not Supported 00:16:13.521 ANA Change Notices: Not Supported 00:16:13.521 PLE Aggregate Log Change Notices: 
Not Supported 00:16:13.521 LBA Status Info Alert Notices: Not Supported 00:16:13.521 EGE Aggregate Log Change Notices: Not Supported 00:16:13.521 Normal NVM Subsystem Shutdown event: Not Supported 00:16:13.521 Zone Descriptor Change Notices: Not Supported 00:16:13.521 Discovery Log Change Notices: Not Supported 00:16:13.521 Controller Attributes 00:16:13.521 128-bit Host Identifier: Supported 00:16:13.521 Non-Operational Permissive Mode: Not Supported 00:16:13.521 NVM Sets: Not Supported 00:16:13.521 Read Recovery Levels: Not Supported 00:16:13.521 Endurance Groups: Not Supported 00:16:13.521 Predictable Latency Mode: Not Supported 00:16:13.521 Traffic Based Keep ALive: Not Supported 00:16:13.521 Namespace Granularity: Not Supported 00:16:13.521 SQ Associations: Not Supported 00:16:13.521 UUID List: Not Supported 00:16:13.521 Multi-Domain Subsystem: Not Supported 00:16:13.521 Fixed Capacity Management: Not Supported 00:16:13.521 Variable Capacity Management: Not Supported 00:16:13.521 Delete Endurance Group: Not Supported 00:16:13.521 Delete NVM Set: Not Supported 00:16:13.521 Extended LBA Formats Supported: Not Supported 00:16:13.521 Flexible Data Placement Supported: Not Supported 00:16:13.521 00:16:13.521 Controller Memory Buffer Support 00:16:13.521 ================================ 00:16:13.521 Supported: No 00:16:13.521 00:16:13.521 Persistent Memory Region Support 00:16:13.521 ================================ 00:16:13.521 Supported: No 00:16:13.521 00:16:13.521 Admin Command Set Attributes 00:16:13.521 ============================ 00:16:13.521 Security Send/Receive: Not Supported 00:16:13.521 Format NVM: Not Supported 00:16:13.521 Firmware Activate/Download: Not Supported 00:16:13.521 Namespace Management: Not Supported 00:16:13.521 Device Self-Test: Not Supported 00:16:13.521 Directives: Not Supported 00:16:13.521 NVMe-MI: Not Supported 00:16:13.521 Virtualization Management: Not Supported 00:16:13.521 Doorbell Buffer Config: Not Supported 00:16:13.521 Get LBA Status Capability: Not Supported 00:16:13.521 Command & Feature Lockdown Capability: Not Supported 00:16:13.521 Abort Command Limit: 4 00:16:13.521 Async Event Request Limit: 4 00:16:13.521 Number of Firmware Slots: N/A 00:16:13.521 Firmware Slot 1 Read-Only: N/A 00:16:13.521 Firmware Activation Without Reset: N/A 00:16:13.521 Multiple Update Detection Support: N/A 00:16:13.521 Firmware Update Granularity: No Information Provided 00:16:13.521 Per-Namespace SMART Log: No 00:16:13.521 Asymmetric Namespace Access Log Page: Not Supported 00:16:13.521 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:13.521 Command Effects Log Page: Supported 00:16:13.521 Get Log Page Extended Data: Supported 00:16:13.521 Telemetry Log Pages: Not Supported 00:16:13.521 Persistent Event Log Pages: Not Supported 00:16:13.521 Supported Log Pages Log Page: May Support 00:16:13.521 Commands Supported & Effects Log Page: Not Supported 00:16:13.521 Feature Identifiers & Effects Log Page:May Support 00:16:13.521 NVMe-MI Commands & Effects Log Page: May Support 00:16:13.521 Data Area 4 for Telemetry Log: Not Supported 00:16:13.521 Error Log Page Entries Supported: 128 00:16:13.521 Keep Alive: Supported 00:16:13.521 Keep Alive Granularity: 10000 ms 00:16:13.521 00:16:13.521 NVM Command Set Attributes 00:16:13.521 ========================== 00:16:13.521 Submission Queue Entry Size 00:16:13.521 Max: 64 00:16:13.521 Min: 64 00:16:13.521 Completion Queue Entry Size 00:16:13.521 Max: 16 00:16:13.521 Min: 16 00:16:13.521 Number of Namespaces: 32 00:16:13.521 Compare 
Command: Supported 00:16:13.521 Write Uncorrectable Command: Not Supported 00:16:13.521 Dataset Management Command: Supported 00:16:13.521 Write Zeroes Command: Supported 00:16:13.521 Set Features Save Field: Not Supported 00:16:13.521 Reservations: Not Supported 00:16:13.521 Timestamp: Not Supported 00:16:13.521 Copy: Supported 00:16:13.521 Volatile Write Cache: Present 00:16:13.521 Atomic Write Unit (Normal): 1 00:16:13.521 Atomic Write Unit (PFail): 1 00:16:13.521 Atomic Compare & Write Unit: 1 00:16:13.521 Fused Compare & Write: Supported 00:16:13.521 Scatter-Gather List 00:16:13.521 SGL Command Set: Supported (Dword aligned) 00:16:13.521 SGL Keyed: Not Supported 00:16:13.521 SGL Bit Bucket Descriptor: Not Supported 00:16:13.521 SGL Metadata Pointer: Not Supported 00:16:13.521 Oversized SGL: Not Supported 00:16:13.521 SGL Metadata Address: Not Supported 00:16:13.521 SGL Offset: Not Supported 00:16:13.521 Transport SGL Data Block: Not Supported 00:16:13.521 Replay Protected Memory Block: Not Supported 00:16:13.521 00:16:13.521 Firmware Slot Information 00:16:13.521 ========================= 00:16:13.521 Active slot: 1 00:16:13.522 Slot 1 Firmware Revision: 24.09 00:16:13.522 00:16:13.522 00:16:13.522 Commands Supported and Effects 00:16:13.522 ============================== 00:16:13.522 Admin Commands 00:16:13.522 -------------- 00:16:13.522 Get Log Page (02h): Supported 00:16:13.522 Identify (06h): Supported 00:16:13.522 Abort (08h): Supported 00:16:13.522 Set Features (09h): Supported 00:16:13.522 Get Features (0Ah): Supported 00:16:13.522 Asynchronous Event Request (0Ch): Supported 00:16:13.522 Keep Alive (18h): Supported 00:16:13.522 I/O Commands 00:16:13.522 ------------ 00:16:13.522 Flush (00h): Supported LBA-Change 00:16:13.522 Write (01h): Supported LBA-Change 00:16:13.522 Read (02h): Supported 00:16:13.522 Compare (05h): Supported 00:16:13.522 Write Zeroes (08h): Supported LBA-Change 00:16:13.522 Dataset Management (09h): Supported LBA-Change 00:16:13.522 Copy (19h): Supported LBA-Change 00:16:13.522 Unknown (79h): Supported LBA-Change 00:16:13.522 Unknown (7Ah): Supported 00:16:13.522 00:16:13.522 Error Log 00:16:13.522 ========= 00:16:13.522 00:16:13.522 Arbitration 00:16:13.522 =========== 00:16:13.522 Arbitration Burst: 1 00:16:13.522 00:16:13.522 Power Management 00:16:13.522 ================ 00:16:13.522 Number of Power States: 1 00:16:13.522 Current Power State: Power State #0 00:16:13.522 Power State #0: 00:16:13.522 Max Power: 0.00 W 00:16:13.522 Non-Operational State: Operational 00:16:13.522 Entry Latency: Not Reported 00:16:13.522 Exit Latency: Not Reported 00:16:13.522 Relative Read Throughput: 0 00:16:13.522 Relative Read Latency: 0 00:16:13.522 Relative Write Throughput: 0 00:16:13.522 Relative Write Latency: 0 00:16:13.522 Idle Power: Not Reported 00:16:13.522 Active Power: Not Reported 00:16:13.522 Non-Operational Permissive Mode: Not Supported 00:16:13.522 00:16:13.522 Health Information 00:16:13.522 ================== 00:16:13.522 Critical Warnings: 00:16:13.522 Available Spare Space: OK 00:16:13.522 Temperature: OK 00:16:13.522 Device Reliability: OK 00:16:13.522 Read Only: No 00:16:13.522 Volatile Memory Backup: OK 00:16:13.522 Current Temperature: 0 Kelvin (-2[2024-06-07 14:18:37.073553] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:13.522 [2024-06-07 14:18:37.073564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 
p:1 m:0 dnr:0 00:16:13.522 [2024-06-07 14:18:37.073589] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:13.522 [2024-06-07 14:18:37.073598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.522 [2024-06-07 14:18:37.073605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.522 [2024-06-07 14:18:37.073611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.522 [2024-06-07 14:18:37.073617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:13.522 [2024-06-07 14:18:37.073731] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:13.522 [2024-06-07 14:18:37.073739] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:13.522 [2024-06-07 14:18:37.074734] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:13.522 [2024-06-07 14:18:37.074772] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:13.522 [2024-06-07 14:18:37.074781] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:13.522 [2024-06-07 14:18:37.075736] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:13.522 [2024-06-07 14:18:37.075747] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:13.522 [2024-06-07 14:18:37.075805] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:13.522 [2024-06-07 14:18:37.077767] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:13.522 73 Celsius) 00:16:13.522 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:13.522 Available Spare: 0% 00:16:13.522 Available Spare Threshold: 0% 00:16:13.522 Life Percentage Used: 0% 00:16:13.522 Data Units Read: 0 00:16:13.522 Data Units Written: 0 00:16:13.522 Host Read Commands: 0 00:16:13.522 Host Write Commands: 0 00:16:13.522 Controller Busy Time: 0 minutes 00:16:13.522 Power Cycles: 0 00:16:13.522 Power On Hours: 0 hours 00:16:13.522 Unsafe Shutdowns: 0 00:16:13.522 Unrecoverable Media Errors: 0 00:16:13.522 Lifetime Error Log Entries: 0 00:16:13.522 Warning Temperature Time: 0 minutes 00:16:13.522 Critical Temperature Time: 0 minutes 00:16:13.522 00:16:13.522 Number of Queues 00:16:13.522 ================ 00:16:13.522 Number of I/O Submission Queues: 127 00:16:13.522 Number of I/O Completion Queues: 127 00:16:13.522 00:16:13.522 Active Namespaces 00:16:13.522 ================= 00:16:13.522 Namespace ID:1 00:16:13.522 Error Recovery Timeout: Unlimited 00:16:13.522 Command Set Identifier: NVM (00h) 00:16:13.522 Deallocate: Supported 00:16:13.522 Deallocated/Unwritten Error: Not Supported 00:16:13.522 Deallocated Read Value: Unknown 00:16:13.522 Deallocate 
in Write Zeroes: Not Supported 00:16:13.522 Deallocated Guard Field: 0xFFFF 00:16:13.522 Flush: Supported 00:16:13.522 Reservation: Supported 00:16:13.522 Namespace Sharing Capabilities: Multiple Controllers 00:16:13.522 Size (in LBAs): 131072 (0GiB) 00:16:13.522 Capacity (in LBAs): 131072 (0GiB) 00:16:13.522 Utilization (in LBAs): 131072 (0GiB) 00:16:13.522 NGUID: 78CB31EEFCCA45B4BB99319B27E00669 00:16:13.522 UUID: 78cb31ee-fcca-45b4-bb99-319b27e00669 00:16:13.522 Thin Provisioning: Not Supported 00:16:13.522 Per-NS Atomic Units: Yes 00:16:13.522 Atomic Boundary Size (Normal): 0 00:16:13.522 Atomic Boundary Size (PFail): 0 00:16:13.522 Atomic Boundary Offset: 0 00:16:13.522 Maximum Single Source Range Length: 65535 00:16:13.522 Maximum Copy Length: 65535 00:16:13.522 Maximum Source Range Count: 1 00:16:13.522 NGUID/EUI64 Never Reused: No 00:16:13.522 Namespace Write Protected: No 00:16:13.522 Number of LBA Formats: 1 00:16:13.522 Current LBA Format: LBA Format #00 00:16:13.522 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:13.522 00:16:13.522 14:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:13.522 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.783 [2024-06-07 14:18:37.260839] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:19.074 Initializing NVMe Controllers 00:16:19.074 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:19.074 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:19.074 Initialization complete. Launching workers. 00:16:19.074 ======================================================== 00:16:19.074 Latency(us) 00:16:19.074 Device Information : IOPS MiB/s Average min max 00:16:19.074 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39967.00 156.12 3205.38 836.06 7698.65 00:16:19.074 ======================================================== 00:16:19.074 Total : 39967.00 156.12 3205.38 836.06 7698.65 00:16:19.074 00:16:19.074 [2024-06-07 14:18:42.281461] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:19.074 14:18:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:19.074 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.074 [2024-06-07 14:18:42.452290] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:24.376 Initializing NVMe Controllers 00:16:24.376 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:24.376 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:24.376 Initialization complete. Launching workers. 
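Every tool run in this part of the test addresses the controller with a VFIOUSER transport ID string rather than a TCP address. As a reference, the two invocations that produced the identify dump and the read-throughput table above are restated below; the flags are exactly those from the log, the write run that follows only swaps -w read for -w write, and the trid/bin shell variables are introduced here purely for readability.

trid='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

# Controller/namespace identify dump with nvme, nvme_vfio and vfio_pci debug logging
$bin/spdk_nvme_identify -r "$trid" -g -L nvme -L nvme_vfio -L vfio_pci

# 5-second read run: 4 KiB I/O, queue depth 128, core mask 0x2
# (-s 256 and -g are memory/addressing flags carried over unchanged from the test script)
$bin/spdk_nvme_perf -r "$trid" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2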
00:16:24.376 ======================================================== 00:16:24.376 Latency(us) 00:16:24.376 Device Information : IOPS MiB/s Average min max 00:16:24.376 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16052.68 62.71 7979.29 4988.24 9977.13 00:16:24.376 ======================================================== 00:16:24.376 Total : 16052.68 62.71 7979.29 4988.24 9977.13 00:16:24.376 00:16:24.376 [2024-06-07 14:18:47.495291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:24.376 14:18:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:24.376 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.376 [2024-06-07 14:18:47.689130] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:29.661 [2024-06-07 14:18:52.766394] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:29.661 Initializing NVMe Controllers 00:16:29.661 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:29.661 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:29.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:29.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:29.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:29.661 Initialization complete. Launching workers. 00:16:29.661 Starting thread on core 2 00:16:29.661 Starting thread on core 3 00:16:29.661 Starting thread on core 1 00:16:29.661 14:18:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:29.661 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.661 [2024-06-07 14:18:53.031602] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:32.963 [2024-06-07 14:18:56.092962] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:32.963 Initializing NVMe Controllers 00:16:32.963 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:32.963 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:32.963 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:32.963 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:32.963 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:32.963 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:32.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:32.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:32.963 Initialization complete. Launching workers. 
00:16:32.963 Starting thread on core 1 with urgent priority queue 00:16:32.963 Starting thread on core 2 with urgent priority queue 00:16:32.963 Starting thread on core 3 with urgent priority queue 00:16:32.963 Starting thread on core 0 with urgent priority queue 00:16:32.963 SPDK bdev Controller (SPDK1 ) core 0: 9392.33 IO/s 10.65 secs/100000 ios 00:16:32.963 SPDK bdev Controller (SPDK1 ) core 1: 13317.33 IO/s 7.51 secs/100000 ios 00:16:32.963 SPDK bdev Controller (SPDK1 ) core 2: 10450.67 IO/s 9.57 secs/100000 ios 00:16:32.963 SPDK bdev Controller (SPDK1 ) core 3: 14244.33 IO/s 7.02 secs/100000 ios 00:16:32.963 ======================================================== 00:16:32.963 00:16:32.963 14:18:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:32.963 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.963 [2024-06-07 14:18:56.363663] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:32.963 Initializing NVMe Controllers 00:16:32.963 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:32.963 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:32.963 Namespace ID: 1 size: 0GB 00:16:32.963 Initialization complete. 00:16:32.963 INFO: using host memory buffer for IO 00:16:32.963 Hello world! 00:16:32.963 [2024-06-07 14:18:56.400942] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:32.963 14:18:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:32.963 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.241 [2024-06-07 14:18:56.667622] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:34.182 Initializing NVMe Controllers 00:16:34.182 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:34.182 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:34.182 Initialization complete. Launching workers. 
00:16:34.182 submit (in ns) avg, min, max = 9870.7, 3940.8, 3999222.5 00:16:34.182 complete (in ns) avg, min, max = 16819.1, 2379.2, 3998324.2 00:16:34.182 00:16:34.182 Submit histogram 00:16:34.182 ================ 00:16:34.182 Range in us Cumulative Count 00:16:34.182 3.920 - 3.947: 0.0929% ( 18) 00:16:34.182 3.947 - 3.973: 1.4140% ( 256) 00:16:34.182 3.973 - 4.000: 8.0869% ( 1293) 00:16:34.182 4.000 - 4.027: 18.6303% ( 2043) 00:16:34.182 4.027 - 4.053: 30.0253% ( 2208) 00:16:34.182 4.053 - 4.080: 42.6072% ( 2438) 00:16:34.182 4.080 - 4.107: 58.3991% ( 3060) 00:16:34.182 4.107 - 4.133: 74.1911% ( 3060) 00:16:34.182 4.133 - 4.160: 86.8504% ( 2453) 00:16:34.182 4.160 - 4.187: 94.1529% ( 1415) 00:16:34.182 4.187 - 4.213: 97.4970% ( 648) 00:16:34.182 4.213 - 4.240: 98.9833% ( 288) 00:16:34.182 4.240 - 4.267: 99.3497% ( 71) 00:16:34.182 4.267 - 4.293: 99.4220% ( 14) 00:16:34.182 4.293 - 4.320: 99.4375% ( 3) 00:16:34.182 4.373 - 4.400: 99.4426% ( 1) 00:16:34.182 4.587 - 4.613: 99.4478% ( 1) 00:16:34.182 4.747 - 4.773: 99.4530% ( 1) 00:16:34.182 4.960 - 4.987: 99.4581% ( 1) 00:16:34.182 5.040 - 5.067: 99.4633% ( 1) 00:16:34.182 5.227 - 5.253: 99.4684% ( 1) 00:16:34.182 5.307 - 5.333: 99.4736% ( 1) 00:16:34.182 5.440 - 5.467: 99.4788% ( 1) 00:16:34.182 5.733 - 5.760: 99.4891% ( 2) 00:16:34.182 5.840 - 5.867: 99.4942% ( 1) 00:16:34.182 5.893 - 5.920: 99.4994% ( 1) 00:16:34.182 5.947 - 5.973: 99.5097% ( 2) 00:16:34.182 5.973 - 6.000: 99.5149% ( 1) 00:16:34.182 6.000 - 6.027: 99.5200% ( 1) 00:16:34.182 6.027 - 6.053: 99.5252% ( 1) 00:16:34.182 6.080 - 6.107: 99.5355% ( 2) 00:16:34.182 6.107 - 6.133: 99.5510% ( 3) 00:16:34.182 6.133 - 6.160: 99.5613% ( 2) 00:16:34.182 6.160 - 6.187: 99.5665% ( 1) 00:16:34.182 6.187 - 6.213: 99.5871% ( 4) 00:16:34.182 6.213 - 6.240: 99.6026% ( 3) 00:16:34.182 6.267 - 6.293: 99.6129% ( 2) 00:16:34.182 6.293 - 6.320: 99.6181% ( 1) 00:16:34.182 6.373 - 6.400: 99.6336% ( 3) 00:16:34.182 6.400 - 6.427: 99.6491% ( 3) 00:16:34.182 6.427 - 6.453: 99.6646% ( 3) 00:16:34.182 6.480 - 6.507: 99.6697% ( 1) 00:16:34.182 6.507 - 6.533: 99.6800% ( 2) 00:16:34.182 6.587 - 6.613: 99.6904% ( 2) 00:16:34.182 6.640 - 6.667: 99.7007% ( 2) 00:16:34.182 6.693 - 6.720: 99.7162% ( 3) 00:16:34.182 6.747 - 6.773: 99.7213% ( 1) 00:16:34.182 6.773 - 6.800: 99.7471% ( 5) 00:16:34.182 6.800 - 6.827: 99.7574% ( 2) 00:16:34.182 6.880 - 6.933: 99.7781% ( 4) 00:16:34.182 6.987 - 7.040: 99.7884% ( 2) 00:16:34.182 7.040 - 7.093: 99.7936% ( 1) 00:16:34.182 7.253 - 7.307: 99.7987% ( 1) 00:16:34.182 7.307 - 7.360: 99.8039% ( 1) 00:16:34.182 7.413 - 7.467: 99.8091% ( 1) 00:16:34.182 7.467 - 7.520: 99.8142% ( 1) 00:16:34.182 7.573 - 7.627: 99.8194% ( 1) 00:16:34.182 7.627 - 7.680: 99.8245% ( 1) 00:16:34.182 7.680 - 7.733: 99.8297% ( 1) 00:16:34.182 8.107 - 8.160: 99.8349% ( 1) 00:16:34.182 8.373 - 8.427: 99.8400% ( 1) 00:16:34.182 8.587 - 8.640: 99.8452% ( 1) 00:16:34.182 9.600 - 9.653: 99.8503% ( 1) 00:16:34.182 12.373 - 12.427: 99.8555% ( 1) 00:16:34.182 3986.773 - 4014.080: 100.0000% ( 28) 00:16:34.182 00:16:34.182 Complete histogram 00:16:34.182 ================== 00:16:34.182 Range in us Cumulative Count 00:16:34.182 2.373 - 2.387: 0.0103% ( 2) 00:16:34.182 2.387 - 2.400: 0.0568% ( 9) 00:16:34.182 2.400 - 2.413: 1.2592% ( 233) 00:16:34.182 2.413 - 2.427: 1.3366% ( 15) 00:16:34.182 2.427 - 2.440: 1.5121% ( 34) 00:16:34.182 2.440 - 2.453: 6.2032% ( 909) 00:16:34.182 2.453 - 2.467: 57.8624% ( 10010) 00:16:34.182 2.467 - 2.480: 63.8024% ( 1151) 00:16:34.182 2.480 - 2.493: 74.5420% ( 2081) 00:16:34.182 2.493 
- 2.507: 79.6821% ( 996) 00:16:34.182 2.507 - 2.520: 81.8238% ( 415) 00:16:34.182 2.520 - 2.533: 87.3819% ( 1077) 00:16:34.182 2.533 - [2024-06-07 14:18:57.689311] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:34.182 2.547: 93.3220% ( 1151) 00:16:34.182 2.547 - 2.560: 96.4958% ( 615) 00:16:34.182 2.560 - 2.573: 98.0183% ( 295) 00:16:34.182 2.573 - 2.587: 98.8853% ( 168) 00:16:34.182 2.587 - 2.600: 99.3033% ( 81) 00:16:34.182 2.600 - 2.613: 99.3910% ( 17) 00:16:34.182 2.613 - 2.627: 99.4323% ( 8) 00:16:34.182 2.627 - 2.640: 99.4375% ( 1) 00:16:34.182 2.653 - 2.667: 99.4426% ( 1) 00:16:34.182 4.267 - 4.293: 99.4478% ( 1) 00:16:34.182 4.293 - 4.320: 99.4530% ( 1) 00:16:34.182 4.320 - 4.347: 99.4581% ( 1) 00:16:34.182 4.400 - 4.427: 99.4684% ( 2) 00:16:34.182 4.427 - 4.453: 99.4736% ( 1) 00:16:34.182 4.453 - 4.480: 99.4788% ( 1) 00:16:34.182 4.480 - 4.507: 99.4839% ( 1) 00:16:34.182 4.533 - 4.560: 99.4891% ( 1) 00:16:34.182 4.560 - 4.587: 99.4942% ( 1) 00:16:34.182 4.587 - 4.613: 99.4994% ( 1) 00:16:34.182 4.693 - 4.720: 99.5046% ( 1) 00:16:34.182 4.747 - 4.773: 99.5097% ( 1) 00:16:34.182 4.773 - 4.800: 99.5200% ( 2) 00:16:34.182 4.853 - 4.880: 99.5304% ( 2) 00:16:34.182 4.880 - 4.907: 99.5355% ( 1) 00:16:34.182 4.933 - 4.960: 99.5407% ( 1) 00:16:34.182 4.987 - 5.013: 99.5510% ( 2) 00:16:34.182 5.013 - 5.040: 99.5562% ( 1) 00:16:34.182 5.040 - 5.067: 99.5665% ( 2) 00:16:34.182 5.120 - 5.147: 99.5717% ( 1) 00:16:34.182 5.147 - 5.173: 99.5820% ( 2) 00:16:34.182 5.173 - 5.200: 99.5871% ( 1) 00:16:34.182 5.200 - 5.227: 99.5923% ( 1) 00:16:34.182 5.227 - 5.253: 99.5975% ( 1) 00:16:34.182 5.280 - 5.307: 99.6026% ( 1) 00:16:34.182 5.333 - 5.360: 99.6078% ( 1) 00:16:34.182 5.440 - 5.467: 99.6129% ( 1) 00:16:34.182 5.467 - 5.493: 99.6181% ( 1) 00:16:34.182 5.600 - 5.627: 99.6233% ( 1) 00:16:34.182 5.627 - 5.653: 99.6284% ( 1) 00:16:34.182 10.293 - 10.347: 99.6336% ( 1) 00:16:34.182 44.587 - 44.800: 99.6387% ( 1) 00:16:34.182 2129.920 - 2143.573: 99.6439% ( 1) 00:16:34.182 3986.773 - 4014.080: 100.0000% ( 69) 00:16:34.182 00:16:34.183 14:18:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:34.183 14:18:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:34.183 14:18:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:34.183 14:18:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:34.183 14:18:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:34.443 [ 00:16:34.443 { 00:16:34.443 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:34.443 "subtype": "Discovery", 00:16:34.443 "listen_addresses": [], 00:16:34.443 "allow_any_host": true, 00:16:34.443 "hosts": [] 00:16:34.443 }, 00:16:34.443 { 00:16:34.443 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:34.443 "subtype": "NVMe", 00:16:34.443 "listen_addresses": [ 00:16:34.443 { 00:16:34.443 "trtype": "VFIOUSER", 00:16:34.443 "adrfam": "IPv4", 00:16:34.443 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:34.443 "trsvcid": "0" 00:16:34.443 } 00:16:34.443 ], 00:16:34.443 "allow_any_host": true, 00:16:34.443 "hosts": [], 00:16:34.443 "serial_number": "SPDK1", 00:16:34.443 "model_number": "SPDK bdev Controller", 00:16:34.443 
"max_namespaces": 32, 00:16:34.443 "min_cntlid": 1, 00:16:34.443 "max_cntlid": 65519, 00:16:34.444 "namespaces": [ 00:16:34.444 { 00:16:34.444 "nsid": 1, 00:16:34.444 "bdev_name": "Malloc1", 00:16:34.444 "name": "Malloc1", 00:16:34.444 "nguid": "78CB31EEFCCA45B4BB99319B27E00669", 00:16:34.444 "uuid": "78cb31ee-fcca-45b4-bb99-319b27e00669" 00:16:34.444 } 00:16:34.444 ] 00:16:34.444 }, 00:16:34.444 { 00:16:34.444 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:34.444 "subtype": "NVMe", 00:16:34.444 "listen_addresses": [ 00:16:34.444 { 00:16:34.444 "trtype": "VFIOUSER", 00:16:34.444 "adrfam": "IPv4", 00:16:34.444 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:34.444 "trsvcid": "0" 00:16:34.444 } 00:16:34.444 ], 00:16:34.444 "allow_any_host": true, 00:16:34.444 "hosts": [], 00:16:34.444 "serial_number": "SPDK2", 00:16:34.444 "model_number": "SPDK bdev Controller", 00:16:34.444 "max_namespaces": 32, 00:16:34.444 "min_cntlid": 1, 00:16:34.444 "max_cntlid": 65519, 00:16:34.444 "namespaces": [ 00:16:34.444 { 00:16:34.444 "nsid": 1, 00:16:34.444 "bdev_name": "Malloc2", 00:16:34.444 "name": "Malloc2", 00:16:34.444 "nguid": "DBC4D3580A6D47689C042D183F66E361", 00:16:34.444 "uuid": "dbc4d358-0a6d-4768-9c04-2d183f66e361" 00:16:34.444 } 00:16:34.444 ] 00:16:34.444 } 00:16:34.444 ] 00:16:34.444 14:18:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:34.444 14:18:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=466175 00:16:34.444 14:18:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:34.444 14:18:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:16:34.444 14:18:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:34.444 14:18:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:34.444 14:18:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:34.444 14:18:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:16:34.444 14:18:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:34.444 14:18:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:34.444 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.444 Malloc3 00:16:34.444 14:18:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:34.444 [2024-06-07 14:18:58.088411] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:34.705 [2024-06-07 14:18:58.274671] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:34.705 14:18:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:34.705 Asynchronous Event Request test 00:16:34.705 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:34.705 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:34.705 Registering asynchronous event callbacks... 00:16:34.705 Starting namespace attribute notice tests for all controllers... 00:16:34.705 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:34.705 aer_cb - Changed Namespace 00:16:34.705 Cleaning up... 00:16:34.967 [ 00:16:34.967 { 00:16:34.967 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:34.967 "subtype": "Discovery", 00:16:34.967 "listen_addresses": [], 00:16:34.967 "allow_any_host": true, 00:16:34.967 "hosts": [] 00:16:34.967 }, 00:16:34.967 { 00:16:34.967 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:34.967 "subtype": "NVMe", 00:16:34.967 "listen_addresses": [ 00:16:34.967 { 00:16:34.967 "trtype": "VFIOUSER", 00:16:34.967 "adrfam": "IPv4", 00:16:34.967 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:34.967 "trsvcid": "0" 00:16:34.967 } 00:16:34.967 ], 00:16:34.967 "allow_any_host": true, 00:16:34.967 "hosts": [], 00:16:34.967 "serial_number": "SPDK1", 00:16:34.967 "model_number": "SPDK bdev Controller", 00:16:34.967 "max_namespaces": 32, 00:16:34.967 "min_cntlid": 1, 00:16:34.967 "max_cntlid": 65519, 00:16:34.967 "namespaces": [ 00:16:34.967 { 00:16:34.967 "nsid": 1, 00:16:34.967 "bdev_name": "Malloc1", 00:16:34.967 "name": "Malloc1", 00:16:34.967 "nguid": "78CB31EEFCCA45B4BB99319B27E00669", 00:16:34.967 "uuid": "78cb31ee-fcca-45b4-bb99-319b27e00669" 00:16:34.967 }, 00:16:34.967 { 00:16:34.967 "nsid": 2, 00:16:34.967 "bdev_name": "Malloc3", 00:16:34.967 "name": "Malloc3", 00:16:34.967 "nguid": "18B48E6FE0594406A54D3B44FF02D0F0", 00:16:34.967 "uuid": "18b48e6f-e059-4406-a54d-3b44ff02d0f0" 00:16:34.967 } 00:16:34.967 ] 00:16:34.967 }, 00:16:34.967 { 00:16:34.967 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:34.967 "subtype": "NVMe", 00:16:34.967 "listen_addresses": [ 00:16:34.967 { 00:16:34.967 "trtype": "VFIOUSER", 00:16:34.967 "adrfam": "IPv4", 00:16:34.967 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:34.967 "trsvcid": "0" 00:16:34.967 } 00:16:34.967 ], 00:16:34.967 "allow_any_host": true, 00:16:34.967 "hosts": [], 00:16:34.967 "serial_number": "SPDK2", 00:16:34.967 "model_number": "SPDK bdev Controller", 00:16:34.967 
"max_namespaces": 32, 00:16:34.967 "min_cntlid": 1, 00:16:34.967 "max_cntlid": 65519, 00:16:34.967 "namespaces": [ 00:16:34.967 { 00:16:34.967 "nsid": 1, 00:16:34.967 "bdev_name": "Malloc2", 00:16:34.967 "name": "Malloc2", 00:16:34.967 "nguid": "DBC4D3580A6D47689C042D183F66E361", 00:16:34.967 "uuid": "dbc4d358-0a6d-4768-9c04-2d183f66e361" 00:16:34.967 } 00:16:34.967 ] 00:16:34.967 } 00:16:34.967 ] 00:16:34.967 14:18:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 466175 00:16:34.967 14:18:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:34.967 14:18:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:34.967 14:18:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:34.967 14:18:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:34.967 [2024-06-07 14:18:58.498788] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:16:34.968 [2024-06-07 14:18:58.498849] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466428 ] 00:16:34.968 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.968 [2024-06-07 14:18:58.531716] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:34.968 [2024-06-07 14:18:58.538418] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:34.968 [2024-06-07 14:18:58.538439] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f93a8d87000 00:16:34.968 [2024-06-07 14:18:58.539415] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:34.968 [2024-06-07 14:18:58.542199] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:34.968 [2024-06-07 14:18:58.542435] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:34.968 [2024-06-07 14:18:58.543443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:34.968 [2024-06-07 14:18:58.544454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:34.968 [2024-06-07 14:18:58.545459] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:34.968 [2024-06-07 14:18:58.546461] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:34.968 [2024-06-07 14:18:58.547472] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:34.968 [2024-06-07 14:18:58.548476] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:34.968 [2024-06-07 14:18:58.548487] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f93a7b4e000 00:16:34.968 [2024-06-07 14:18:58.549809] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:34.968 [2024-06-07 14:18:58.567005] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:34.968 [2024-06-07 14:18:58.567023] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:34.968 [2024-06-07 14:18:58.572098] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:34.968 [2024-06-07 14:18:58.572142] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:34.968 [2024-06-07 14:18:58.572224] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:34.968 [2024-06-07 14:18:58.572240] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:34.968 [2024-06-07 14:18:58.572246] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:34.968 [2024-06-07 14:18:58.573104] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:34.968 [2024-06-07 14:18:58.573113] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:34.968 [2024-06-07 14:18:58.573120] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:34.968 [2024-06-07 14:18:58.574111] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:34.968 [2024-06-07 14:18:58.574119] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:34.968 [2024-06-07 14:18:58.574127] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:34.968 [2024-06-07 14:18:58.575115] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:34.968 [2024-06-07 14:18:58.575124] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:34.968 [2024-06-07 14:18:58.576125] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:34.968 [2024-06-07 14:18:58.576136] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:34.968 [2024-06-07 14:18:58.576141] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:34.968 [2024-06-07 14:18:58.576147] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:34.968 [2024-06-07 14:18:58.576253] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:34.968 [2024-06-07 14:18:58.576258] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:34.968 [2024-06-07 14:18:58.576263] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:34.968 [2024-06-07 14:18:58.577129] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:34.968 [2024-06-07 14:18:58.578133] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:34.968 [2024-06-07 14:18:58.579136] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:34.968 [2024-06-07 14:18:58.580142] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:34.968 [2024-06-07 14:18:58.580179] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:34.968 [2024-06-07 14:18:58.581151] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:34.968 [2024-06-07 14:18:58.581159] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:34.968 [2024-06-07 14:18:58.581164] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:34.968 [2024-06-07 14:18:58.581185] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:34.968 [2024-06-07 14:18:58.581198] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:34.968 [2024-06-07 14:18:58.581212] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:34.968 [2024-06-07 14:18:58.581217] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:34.968 [2024-06-07 14:18:58.581229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:34.968 [2024-06-07 14:18:58.586293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:34.968 [2024-06-07 14:18:58.586305] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:34.968 [2024-06-07 14:18:58.586309] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:34.968 [2024-06-07 14:18:58.586314] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:34.968 [2024-06-07 14:18:58.586321] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:34.968 [2024-06-07 14:18:58.586326] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:34.968 [2024-06-07 14:18:58.586330] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:34.968 [2024-06-07 14:18:58.586337] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:34.968 [2024-06-07 14:18:58.586345] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:34.968 [2024-06-07 14:18:58.586356] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:34.968 [2024-06-07 14:18:58.597200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:34.968 [2024-06-07 14:18:58.597212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.968 [2024-06-07 14:18:58.597220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.968 [2024-06-07 14:18:58.597228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.968 [2024-06-07 14:18:58.597236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.968 [2024-06-07 14:18:58.597241] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:34.968 [2024-06-07 14:18:58.597250] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:34.968 [2024-06-07 14:18:58.597259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:34.968 [2024-06-07 14:18:58.605200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:34.968 [2024-06-07 14:18:58.605208] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:34.968 [2024-06-07 14:18:58.605213] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:34.968 [2024-06-07 14:18:58.605220] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:34.968 [2024-06-07 14:18:58.605225] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:34.968 [2024-06-07 14:18:58.605234] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:34.968 [2024-06-07 14:18:58.613200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:34.968 [2024-06-07 14:18:58.613253] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:34.969 [2024-06-07 14:18:58.613261] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:34.969 [2024-06-07 14:18:58.613269] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:34.969 [2024-06-07 14:18:58.613273] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:34.969 [2024-06-07 14:18:58.613279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:35.231 [2024-06-07 14:18:58.621199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:35.231 [2024-06-07 14:18:58.621212] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:35.231 [2024-06-07 14:18:58.621221] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:35.231 [2024-06-07 14:18:58.621228] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:35.231 [2024-06-07 14:18:58.621235] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:35.231 [2024-06-07 14:18:58.621239] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.231 [2024-06-07 14:18:58.621245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.231 [2024-06-07 14:18:58.629199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:35.231 [2024-06-07 14:18:58.629220] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:35.231 [2024-06-07 14:18:58.629228] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:35.231 [2024-06-07 14:18:58.629235] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:35.231 [2024-06-07 14:18:58.629239] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.231 [2024-06-07 14:18:58.629245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.231 [2024-06-07 14:18:58.637198] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:35.231 [2024-06-07 14:18:58.637208] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:35.231 [2024-06-07 14:18:58.637215] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:35.231 [2024-06-07 14:18:58.637224] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:35.231 [2024-06-07 14:18:58.637230] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:35.231 [2024-06-07 14:18:58.637235] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:35.231 [2024-06-07 14:18:58.637240] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:35.232 [2024-06-07 14:18:58.637244] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:35.232 [2024-06-07 14:18:58.637249] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:35.232 [2024-06-07 14:18:58.637268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:35.232 [2024-06-07 14:18:58.645200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:35.232 [2024-06-07 14:18:58.645213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:35.232 [2024-06-07 14:18:58.653199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:35.232 [2024-06-07 14:18:58.653211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:35.232 [2024-06-07 14:18:58.661198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:35.232 [2024-06-07 14:18:58.661211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:35.232 [2024-06-07 14:18:58.669199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:35.232 [2024-06-07 14:18:58.669213] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:35.232 [2024-06-07 14:18:58.669217] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:35.232 [2024-06-07 14:18:58.669221] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:35.232 [2024-06-07 14:18:58.669224] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:35.232 [2024-06-07 14:18:58.669230] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:35.232 [2024-06-07 14:18:58.669237] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:35.232 [2024-06-07 14:18:58.669242] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:35.232 [2024-06-07 14:18:58.669248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:35.232 [2024-06-07 14:18:58.669254] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:35.232 [2024-06-07 14:18:58.669258] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:35.232 [2024-06-07 14:18:58.669264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:35.232 [2024-06-07 14:18:58.669272] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:35.232 [2024-06-07 14:18:58.669276] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:35.232 [2024-06-07 14:18:58.669282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:35.232 [2024-06-07 14:18:58.677201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:35.232 [2024-06-07 14:18:58.677216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:35.232 [2024-06-07 14:18:58.677224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:35.232 [2024-06-07 14:18:58.677233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:35.232 ===================================================== 00:16:35.232 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:35.232 ===================================================== 00:16:35.232 Controller Capabilities/Features 00:16:35.232 ================================ 00:16:35.232 Vendor ID: 4e58 00:16:35.232 Subsystem Vendor ID: 4e58 00:16:35.232 Serial Number: SPDK2 00:16:35.232 Model Number: SPDK bdev Controller 00:16:35.232 Firmware Version: 24.09 00:16:35.232 Recommended Arb Burst: 6 00:16:35.232 IEEE OUI Identifier: 8d 6b 50 00:16:35.232 Multi-path I/O 00:16:35.232 May have multiple subsystem ports: Yes 00:16:35.232 May have multiple controllers: Yes 00:16:35.232 Associated with SR-IOV VF: No 00:16:35.232 Max Data Transfer Size: 131072 00:16:35.232 Max Number of Namespaces: 32 00:16:35.232 Max Number of I/O Queues: 127 00:16:35.232 NVMe Specification Version (VS): 1.3 00:16:35.232 NVMe Specification Version (Identify): 1.3 00:16:35.232 Maximum Queue Entries: 256 00:16:35.232 Contiguous Queues Required: Yes 00:16:35.232 Arbitration Mechanisms Supported 00:16:35.232 Weighted Round Robin: Not Supported 00:16:35.232 Vendor Specific: Not Supported 00:16:35.232 Reset Timeout: 15000 ms 00:16:35.232 Doorbell Stride: 4 bytes 
00:16:35.232 NVM Subsystem Reset: Not Supported 00:16:35.232 Command Sets Supported 00:16:35.232 NVM Command Set: Supported 00:16:35.232 Boot Partition: Not Supported 00:16:35.232 Memory Page Size Minimum: 4096 bytes 00:16:35.232 Memory Page Size Maximum: 4096 bytes 00:16:35.232 Persistent Memory Region: Not Supported 00:16:35.232 Optional Asynchronous Events Supported 00:16:35.232 Namespace Attribute Notices: Supported 00:16:35.232 Firmware Activation Notices: Not Supported 00:16:35.232 ANA Change Notices: Not Supported 00:16:35.232 PLE Aggregate Log Change Notices: Not Supported 00:16:35.232 LBA Status Info Alert Notices: Not Supported 00:16:35.232 EGE Aggregate Log Change Notices: Not Supported 00:16:35.232 Normal NVM Subsystem Shutdown event: Not Supported 00:16:35.232 Zone Descriptor Change Notices: Not Supported 00:16:35.232 Discovery Log Change Notices: Not Supported 00:16:35.232 Controller Attributes 00:16:35.232 128-bit Host Identifier: Supported 00:16:35.232 Non-Operational Permissive Mode: Not Supported 00:16:35.232 NVM Sets: Not Supported 00:16:35.232 Read Recovery Levels: Not Supported 00:16:35.232 Endurance Groups: Not Supported 00:16:35.232 Predictable Latency Mode: Not Supported 00:16:35.232 Traffic Based Keep ALive: Not Supported 00:16:35.232 Namespace Granularity: Not Supported 00:16:35.232 SQ Associations: Not Supported 00:16:35.232 UUID List: Not Supported 00:16:35.232 Multi-Domain Subsystem: Not Supported 00:16:35.232 Fixed Capacity Management: Not Supported 00:16:35.232 Variable Capacity Management: Not Supported 00:16:35.232 Delete Endurance Group: Not Supported 00:16:35.232 Delete NVM Set: Not Supported 00:16:35.232 Extended LBA Formats Supported: Not Supported 00:16:35.232 Flexible Data Placement Supported: Not Supported 00:16:35.232 00:16:35.232 Controller Memory Buffer Support 00:16:35.232 ================================ 00:16:35.232 Supported: No 00:16:35.232 00:16:35.232 Persistent Memory Region Support 00:16:35.232 ================================ 00:16:35.232 Supported: No 00:16:35.232 00:16:35.232 Admin Command Set Attributes 00:16:35.232 ============================ 00:16:35.232 Security Send/Receive: Not Supported 00:16:35.232 Format NVM: Not Supported 00:16:35.232 Firmware Activate/Download: Not Supported 00:16:35.232 Namespace Management: Not Supported 00:16:35.232 Device Self-Test: Not Supported 00:16:35.232 Directives: Not Supported 00:16:35.232 NVMe-MI: Not Supported 00:16:35.232 Virtualization Management: Not Supported 00:16:35.232 Doorbell Buffer Config: Not Supported 00:16:35.232 Get LBA Status Capability: Not Supported 00:16:35.232 Command & Feature Lockdown Capability: Not Supported 00:16:35.232 Abort Command Limit: 4 00:16:35.232 Async Event Request Limit: 4 00:16:35.232 Number of Firmware Slots: N/A 00:16:35.232 Firmware Slot 1 Read-Only: N/A 00:16:35.232 Firmware Activation Without Reset: N/A 00:16:35.232 Multiple Update Detection Support: N/A 00:16:35.232 Firmware Update Granularity: No Information Provided 00:16:35.232 Per-Namespace SMART Log: No 00:16:35.232 Asymmetric Namespace Access Log Page: Not Supported 00:16:35.232 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:35.232 Command Effects Log Page: Supported 00:16:35.232 Get Log Page Extended Data: Supported 00:16:35.232 Telemetry Log Pages: Not Supported 00:16:35.232 Persistent Event Log Pages: Not Supported 00:16:35.232 Supported Log Pages Log Page: May Support 00:16:35.232 Commands Supported & Effects Log Page: Not Supported 00:16:35.232 Feature Identifiers & Effects Log Page:May 
Support 00:16:35.232 NVMe-MI Commands & Effects Log Page: May Support 00:16:35.232 Data Area 4 for Telemetry Log: Not Supported 00:16:35.232 Error Log Page Entries Supported: 128 00:16:35.232 Keep Alive: Supported 00:16:35.232 Keep Alive Granularity: 10000 ms 00:16:35.232 00:16:35.232 NVM Command Set Attributes 00:16:35.232 ========================== 00:16:35.232 Submission Queue Entry Size 00:16:35.232 Max: 64 00:16:35.232 Min: 64 00:16:35.232 Completion Queue Entry Size 00:16:35.233 Max: 16 00:16:35.233 Min: 16 00:16:35.233 Number of Namespaces: 32 00:16:35.233 Compare Command: Supported 00:16:35.233 Write Uncorrectable Command: Not Supported 00:16:35.233 Dataset Management Command: Supported 00:16:35.233 Write Zeroes Command: Supported 00:16:35.233 Set Features Save Field: Not Supported 00:16:35.233 Reservations: Not Supported 00:16:35.233 Timestamp: Not Supported 00:16:35.233 Copy: Supported 00:16:35.233 Volatile Write Cache: Present 00:16:35.233 Atomic Write Unit (Normal): 1 00:16:35.233 Atomic Write Unit (PFail): 1 00:16:35.233 Atomic Compare & Write Unit: 1 00:16:35.233 Fused Compare & Write: Supported 00:16:35.233 Scatter-Gather List 00:16:35.233 SGL Command Set: Supported (Dword aligned) 00:16:35.233 SGL Keyed: Not Supported 00:16:35.233 SGL Bit Bucket Descriptor: Not Supported 00:16:35.233 SGL Metadata Pointer: Not Supported 00:16:35.233 Oversized SGL: Not Supported 00:16:35.233 SGL Metadata Address: Not Supported 00:16:35.233 SGL Offset: Not Supported 00:16:35.233 Transport SGL Data Block: Not Supported 00:16:35.233 Replay Protected Memory Block: Not Supported 00:16:35.233 00:16:35.233 Firmware Slot Information 00:16:35.233 ========================= 00:16:35.233 Active slot: 1 00:16:35.233 Slot 1 Firmware Revision: 24.09 00:16:35.233 00:16:35.233 00:16:35.233 Commands Supported and Effects 00:16:35.233 ============================== 00:16:35.233 Admin Commands 00:16:35.233 -------------- 00:16:35.233 Get Log Page (02h): Supported 00:16:35.233 Identify (06h): Supported 00:16:35.233 Abort (08h): Supported 00:16:35.233 Set Features (09h): Supported 00:16:35.233 Get Features (0Ah): Supported 00:16:35.233 Asynchronous Event Request (0Ch): Supported 00:16:35.233 Keep Alive (18h): Supported 00:16:35.233 I/O Commands 00:16:35.233 ------------ 00:16:35.233 Flush (00h): Supported LBA-Change 00:16:35.233 Write (01h): Supported LBA-Change 00:16:35.233 Read (02h): Supported 00:16:35.233 Compare (05h): Supported 00:16:35.233 Write Zeroes (08h): Supported LBA-Change 00:16:35.233 Dataset Management (09h): Supported LBA-Change 00:16:35.233 Copy (19h): Supported LBA-Change 00:16:35.233 Unknown (79h): Supported LBA-Change 00:16:35.233 Unknown (7Ah): Supported 00:16:35.233 00:16:35.233 Error Log 00:16:35.233 ========= 00:16:35.233 00:16:35.233 Arbitration 00:16:35.233 =========== 00:16:35.233 Arbitration Burst: 1 00:16:35.233 00:16:35.233 Power Management 00:16:35.233 ================ 00:16:35.233 Number of Power States: 1 00:16:35.233 Current Power State: Power State #0 00:16:35.233 Power State #0: 00:16:35.233 Max Power: 0.00 W 00:16:35.233 Non-Operational State: Operational 00:16:35.233 Entry Latency: Not Reported 00:16:35.233 Exit Latency: Not Reported 00:16:35.233 Relative Read Throughput: 0 00:16:35.233 Relative Read Latency: 0 00:16:35.233 Relative Write Throughput: 0 00:16:35.233 Relative Write Latency: 0 00:16:35.233 Idle Power: Not Reported 00:16:35.233 Active Power: Not Reported 00:16:35.233 Non-Operational Permissive Mode: Not Supported 00:16:35.233 00:16:35.233 Health Information 
00:16:35.233 ================== 00:16:35.233 Critical Warnings: 00:16:35.233 Available Spare Space: OK 00:16:35.233 Temperature: OK 00:16:35.233 Device Reliability: OK 00:16:35.233 Read Only: No 00:16:35.233 Volatile Memory Backup: OK 00:16:35.233 Current Temperature: 0 Kelvin (-2[2024-06-07 14:18:58.677333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:35.233 [2024-06-07 14:18:58.685201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:35.233 [2024-06-07 14:18:58.685229] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:35.233 [2024-06-07 14:18:58.685238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.233 [2024-06-07 14:18:58.685244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.233 [2024-06-07 14:18:58.685251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.233 [2024-06-07 14:18:58.685257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:35.233 [2024-06-07 14:18:58.685306] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:35.233 [2024-06-07 14:18:58.685316] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:35.233 [2024-06-07 14:18:58.686314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:35.233 [2024-06-07 14:18:58.686362] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:35.233 [2024-06-07 14:18:58.686369] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:35.233 [2024-06-07 14:18:58.687323] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:35.233 [2024-06-07 14:18:58.687334] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:35.233 [2024-06-07 14:18:58.687382] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:35.233 [2024-06-07 14:18:58.688755] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:35.233 73 Celsius) 00:16:35.233 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:35.233 Available Spare: 0% 00:16:35.233 Available Spare Threshold: 0% 00:16:35.233 Life Percentage Used: 0% 00:16:35.233 Data Units Read: 0 00:16:35.233 Data Units Written: 0 00:16:35.233 Host Read Commands: 0 00:16:35.233 Host Write Commands: 0 00:16:35.233 Controller Busy Time: 0 minutes 00:16:35.233 Power Cycles: 0 00:16:35.233 Power On Hours: 0 hours 00:16:35.233 Unsafe Shutdowns: 0 00:16:35.233 Unrecoverable Media Errors: 0 00:16:35.233 Lifetime Error Log Entries: 0 00:16:35.233 Warning Temperature Time: 0 
minutes 00:16:35.233 Critical Temperature Time: 0 minutes 00:16:35.233 00:16:35.233 Number of Queues 00:16:35.233 ================ 00:16:35.233 Number of I/O Submission Queues: 127 00:16:35.233 Number of I/O Completion Queues: 127 00:16:35.233 00:16:35.233 Active Namespaces 00:16:35.233 ================= 00:16:35.233 Namespace ID:1 00:16:35.233 Error Recovery Timeout: Unlimited 00:16:35.233 Command Set Identifier: NVM (00h) 00:16:35.233 Deallocate: Supported 00:16:35.233 Deallocated/Unwritten Error: Not Supported 00:16:35.233 Deallocated Read Value: Unknown 00:16:35.233 Deallocate in Write Zeroes: Not Supported 00:16:35.233 Deallocated Guard Field: 0xFFFF 00:16:35.233 Flush: Supported 00:16:35.233 Reservation: Supported 00:16:35.233 Namespace Sharing Capabilities: Multiple Controllers 00:16:35.233 Size (in LBAs): 131072 (0GiB) 00:16:35.233 Capacity (in LBAs): 131072 (0GiB) 00:16:35.233 Utilization (in LBAs): 131072 (0GiB) 00:16:35.233 NGUID: DBC4D3580A6D47689C042D183F66E361 00:16:35.233 UUID: dbc4d358-0a6d-4768-9c04-2d183f66e361 00:16:35.233 Thin Provisioning: Not Supported 00:16:35.233 Per-NS Atomic Units: Yes 00:16:35.233 Atomic Boundary Size (Normal): 0 00:16:35.233 Atomic Boundary Size (PFail): 0 00:16:35.233 Atomic Boundary Offset: 0 00:16:35.233 Maximum Single Source Range Length: 65535 00:16:35.233 Maximum Copy Length: 65535 00:16:35.233 Maximum Source Range Count: 1 00:16:35.233 NGUID/EUI64 Never Reused: No 00:16:35.233 Namespace Write Protected: No 00:16:35.233 Number of LBA Formats: 1 00:16:35.233 Current LBA Format: LBA Format #00 00:16:35.233 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:35.233 00:16:35.233 14:18:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:35.233 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.233 [2024-06-07 14:18:58.871248] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:40.519 Initializing NVMe Controllers 00:16:40.519 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:40.519 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:40.519 Initialization complete. Launching workers. 
00:16:40.519 ======================================================== 00:16:40.519 Latency(us) 00:16:40.519 Device Information : IOPS MiB/s Average min max 00:16:40.519 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40088.88 156.60 3192.75 831.57 6823.19 00:16:40.519 ======================================================== 00:16:40.519 Total : 40088.88 156.60 3192.75 831.57 6823.19 00:16:40.519 00:16:40.519 [2024-06-07 14:19:03.976379] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:40.519 14:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:40.519 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.519 [2024-06-07 14:19:04.154944] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:45.808 Initializing NVMe Controllers 00:16:45.808 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:45.808 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:45.808 Initialization complete. Launching workers. 00:16:45.808 ======================================================== 00:16:45.808 Latency(us) 00:16:45.808 Device Information : IOPS MiB/s Average min max 00:16:45.808 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36222.44 141.49 3533.27 1093.89 8626.13 00:16:45.808 ======================================================== 00:16:45.808 Total : 36222.44 141.49 3533.27 1093.89 8626.13 00:16:45.808 00:16:45.808 [2024-06-07 14:19:09.174454] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:45.808 14:19:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:45.808 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.808 [2024-06-07 14:19:09.364648] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:51.113 [2024-06-07 14:19:14.508288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:51.113 Initializing NVMe Controllers 00:16:51.113 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:51.113 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:51.113 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:51.113 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:51.113 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:51.113 Initialization complete. Launching workers. 
00:16:51.113 Starting thread on core 2 00:16:51.113 Starting thread on core 3 00:16:51.113 Starting thread on core 1 00:16:51.113 14:19:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:51.113 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.373 [2024-06-07 14:19:14.773888] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:54.673 [2024-06-07 14:19:17.971358] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:54.673 Initializing NVMe Controllers 00:16:54.673 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:54.674 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:54.674 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:54.674 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:54.674 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:54.674 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:54.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:54.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:54.674 Initialization complete. Launching workers. 00:16:54.674 Starting thread on core 1 with urgent priority queue 00:16:54.674 Starting thread on core 2 with urgent priority queue 00:16:54.674 Starting thread on core 3 with urgent priority queue 00:16:54.674 Starting thread on core 0 with urgent priority queue 00:16:54.674 SPDK bdev Controller (SPDK2 ) core 0: 10669.67 IO/s 9.37 secs/100000 ios 00:16:54.674 SPDK bdev Controller (SPDK2 ) core 1: 11058.00 IO/s 9.04 secs/100000 ios 00:16:54.674 SPDK bdev Controller (SPDK2 ) core 2: 11073.33 IO/s 9.03 secs/100000 ios 00:16:54.674 SPDK bdev Controller (SPDK2 ) core 3: 11556.33 IO/s 8.65 secs/100000 ios 00:16:54.674 ======================================================== 00:16:54.674 00:16:54.674 14:19:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:54.674 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.674 [2024-06-07 14:19:18.236823] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:54.674 Initializing NVMe Controllers 00:16:54.674 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:54.674 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:54.674 Namespace ID: 1 size: 0GB 00:16:54.674 Initialization complete. 00:16:54.674 INFO: using host memory buffer for IO 00:16:54.674 Hello world! 
00:16:54.674 [2024-06-07 14:19:18.246891] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:54.674 14:19:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:54.935 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.935 [2024-06-07 14:19:18.518239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:56.320 Initializing NVMe Controllers 00:16:56.320 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:56.320 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:56.320 Initialization complete. Launching workers. 00:16:56.320 submit (in ns) avg, min, max = 8904.4, 3923.3, 4002205.8 00:16:56.320 complete (in ns) avg, min, max = 17024.6, 2379.2, 6990727.5 00:16:56.320 00:16:56.320 Submit histogram 00:16:56.320 ================ 00:16:56.320 Range in us Cumulative Count 00:16:56.320 3.920 - 3.947: 0.5025% ( 96) 00:16:56.320 3.947 - 3.973: 4.3758% ( 740) 00:16:56.320 3.973 - 4.000: 10.7720% ( 1222) 00:16:56.320 4.000 - 4.027: 19.9005% ( 1744) 00:16:56.320 4.027 - 4.053: 31.1594% ( 2151) 00:16:56.320 4.053 - 4.080: 44.3392% ( 2518) 00:16:56.320 4.080 - 4.107: 59.4504% ( 2887) 00:16:56.320 4.107 - 4.133: 76.2209% ( 3204) 00:16:56.320 4.133 - 4.160: 88.6679% ( 2378) 00:16:56.320 4.160 - 4.187: 95.4672% ( 1299) 00:16:56.320 4.187 - 4.213: 98.2204% ( 526) 00:16:56.320 4.213 - 4.240: 99.1364% ( 175) 00:16:56.320 4.240 - 4.267: 99.3457% ( 40) 00:16:56.320 4.267 - 4.293: 99.3981% ( 10) 00:16:56.320 4.293 - 4.320: 99.4242% ( 5) 00:16:56.320 4.320 - 4.347: 99.4556% ( 6) 00:16:56.320 4.347 - 4.373: 99.4661% ( 2) 00:16:56.320 4.373 - 4.400: 99.4870% ( 4) 00:16:56.320 4.400 - 4.427: 99.4975% ( 2) 00:16:56.320 4.427 - 4.453: 99.5027% ( 1) 00:16:56.320 4.800 - 4.827: 99.5080% ( 1) 00:16:56.320 4.987 - 5.013: 99.5132% ( 1) 00:16:56.320 5.227 - 5.253: 99.5185% ( 1) 00:16:56.320 5.360 - 5.387: 99.5237% ( 1) 00:16:56.320 5.627 - 5.653: 99.5289% ( 1) 00:16:56.320 5.840 - 5.867: 99.5446% ( 3) 00:16:56.320 6.000 - 6.027: 99.5499% ( 1) 00:16:56.320 6.027 - 6.053: 99.5551% ( 1) 00:16:56.320 6.053 - 6.080: 99.5656% ( 2) 00:16:56.321 6.080 - 6.107: 99.5708% ( 1) 00:16:56.321 6.107 - 6.133: 99.5813% ( 2) 00:16:56.321 6.160 - 6.187: 99.5865% ( 1) 00:16:56.321 6.187 - 6.213: 99.5917% ( 1) 00:16:56.321 6.213 - 6.240: 99.6022% ( 2) 00:16:56.321 6.267 - 6.293: 99.6074% ( 1) 00:16:56.321 6.320 - 6.347: 99.6231% ( 3) 00:16:56.321 6.373 - 6.400: 99.6336% ( 2) 00:16:56.321 6.400 - 6.427: 99.6388% ( 1) 00:16:56.321 6.453 - 6.480: 99.6441% ( 1) 00:16:56.321 6.480 - 6.507: 99.6493% ( 1) 00:16:56.321 6.533 - 6.560: 99.6545% ( 1) 00:16:56.321 6.560 - 6.587: 99.6755% ( 4) 00:16:56.321 6.587 - 6.613: 99.6807% ( 1) 00:16:56.321 6.640 - 6.667: 99.6859% ( 1) 00:16:56.321 6.667 - 6.693: 99.6912% ( 1) 00:16:56.321 6.720 - 6.747: 99.6964% ( 1) 00:16:56.321 6.747 - 6.773: 99.7121% ( 3) 00:16:56.321 6.800 - 6.827: 99.7226% ( 2) 00:16:56.321 6.827 - 6.880: 99.7435% ( 4) 00:16:56.321 6.880 - 6.933: 99.7540% ( 2) 00:16:56.321 6.933 - 6.987: 99.7697% ( 3) 00:16:56.321 6.987 - 7.040: 99.7802% ( 2) 00:16:56.321 7.093 - 7.147: 99.7959% ( 3) 00:16:56.321 7.147 - 7.200: 99.8011% ( 1) 00:16:56.321 7.200 - 7.253: 99.8116% ( 2) 00:16:56.321 7.253 - 7.307: 99.8168% ( 1) 00:16:56.321 7.307 - 7.360: 99.8220% ( 1) 
00:16:56.321 7.360 - 7.413: 99.8273% ( 1) 00:16:56.321 7.413 - 7.467: 99.8325% ( 1) 00:16:56.321 7.573 - 7.627: 99.8430% ( 2) 00:16:56.321 7.680 - 7.733: 99.8482% ( 1) 00:16:56.321 7.733 - 7.787: 99.8534% ( 1) 00:16:56.321 7.840 - 7.893: 99.8639% ( 2) 00:16:56.321 9.227 - 9.280: 99.8691% ( 1) 00:16:56.321 10.507 - 10.560: 99.8744% ( 1) 00:16:56.321 11.733 - 11.787: 99.8796% ( 1) 00:16:56.321 3986.773 - 4014.080: 100.0000% ( 23) 00:16:56.321 00:16:56.321 Complete histogram 00:16:56.321 ================== 00:16:56.321 Range in us Cumulative Count 00:16:56.321 2.373 - 2.387: 0.0052% ( 1) 00:16:56.321 2.387 - 2.400: 0.0366% ( 6) 00:16:56.321 2.400 - 2.413: 1.1358% ( 210) 00:16:56.321 2.413 - 2.427: 1.2562% ( 23) 00:16:56.321 2.427 - 2.440: 1.4604% ( 39) 00:16:56.321 2.440 - 2.453: 33.2635% ( 6076) 00:16:56.321 2.453 - 2.467: 61.7796% ( 5448) 00:16:56.321 2.467 - 2.480: 69.0448% ( 1388) 00:16:56.321 2.480 - 2.493: 75.8964% ( 1309) 00:16:56.321 2.493 - 2.507: 80.4606% ( 872) 00:16:56.321 2.507 - [2024-06-07 14:19:19.614845] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:56.321 2.520: 82.8788% ( 462) 00:16:56.321 2.520 - 2.533: 88.1654% ( 1010) 00:16:56.321 2.533 - 2.547: 93.8655% ( 1089) 00:16:56.321 2.547 - 2.560: 96.6815% ( 538) 00:16:56.321 2.560 - 2.573: 98.1000% ( 271) 00:16:56.321 2.573 - 2.587: 98.9793% ( 168) 00:16:56.321 2.587 - 2.600: 99.3091% ( 63) 00:16:56.321 2.600 - 2.613: 99.3824% ( 14) 00:16:56.321 2.613 - 2.627: 99.3981% ( 3) 00:16:56.321 4.213 - 4.240: 99.4033% ( 1) 00:16:56.321 4.293 - 4.320: 99.4085% ( 1) 00:16:56.321 4.400 - 4.427: 99.4190% ( 2) 00:16:56.321 4.427 - 4.453: 99.4242% ( 1) 00:16:56.321 4.587 - 4.613: 99.4295% ( 1) 00:16:56.321 4.613 - 4.640: 99.4347% ( 1) 00:16:56.321 4.640 - 4.667: 99.4452% ( 2) 00:16:56.321 4.667 - 4.693: 99.4504% ( 1) 00:16:56.321 4.693 - 4.720: 99.4556% ( 1) 00:16:56.321 4.800 - 4.827: 99.4609% ( 1) 00:16:56.321 4.933 - 4.960: 99.4661% ( 1) 00:16:56.321 5.013 - 5.040: 99.4766% ( 2) 00:16:56.321 5.040 - 5.067: 99.4818% ( 1) 00:16:56.321 5.067 - 5.093: 99.4870% ( 1) 00:16:56.321 5.093 - 5.120: 99.4923% ( 1) 00:16:56.321 5.173 - 5.200: 99.4975% ( 1) 00:16:56.321 5.307 - 5.333: 99.5027% ( 1) 00:16:56.321 5.333 - 5.360: 99.5132% ( 2) 00:16:56.321 5.360 - 5.387: 99.5185% ( 1) 00:16:56.321 5.387 - 5.413: 99.5237% ( 1) 00:16:56.321 5.440 - 5.467: 99.5289% ( 1) 00:16:56.321 5.467 - 5.493: 99.5342% ( 1) 00:16:56.321 5.493 - 5.520: 99.5394% ( 1) 00:16:56.321 5.573 - 5.600: 99.5446% ( 1) 00:16:56.321 5.653 - 5.680: 99.5499% ( 1) 00:16:56.321 5.680 - 5.707: 99.5551% ( 1) 00:16:56.321 5.707 - 5.733: 99.5603% ( 1) 00:16:56.321 5.760 - 5.787: 99.5656% ( 1) 00:16:56.321 5.787 - 5.813: 99.5760% ( 2) 00:16:56.321 5.840 - 5.867: 99.5813% ( 1) 00:16:56.321 5.867 - 5.893: 99.5917% ( 2) 00:16:56.321 5.947 - 5.973: 99.5970% ( 1) 00:16:56.321 6.053 - 6.080: 99.6022% ( 1) 00:16:56.321 6.187 - 6.213: 99.6074% ( 1) 00:16:56.321 6.347 - 6.373: 99.6179% ( 2) 00:16:56.321 6.400 - 6.427: 99.6231% ( 1) 00:16:56.321 6.453 - 6.480: 99.6284% ( 1) 00:16:56.321 7.360 - 7.413: 99.6336% ( 1) 00:16:56.321 7.573 - 7.627: 99.6388% ( 1) 00:16:56.321 3003.733 - 3017.387: 99.6441% ( 1) 00:16:56.321 3031.040 - 3044.693: 99.6493% ( 1) 00:16:56.321 3986.773 - 4014.080: 99.9843% ( 64) 00:16:56.321 4123.307 - 4150.613: 99.9895% ( 1) 00:16:56.321 4969.813 - 4997.120: 99.9948% ( 1) 00:16:56.321 6990.507 - 7045.120: 100.0000% ( 1) 00:16:56.321 00:16:56.321 14:19:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # 
aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:56.321 14:19:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:56.321 14:19:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:56.321 14:19:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:56.321 14:19:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:56.321 [ 00:16:56.321 { 00:16:56.321 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:56.321 "subtype": "Discovery", 00:16:56.321 "listen_addresses": [], 00:16:56.321 "allow_any_host": true, 00:16:56.321 "hosts": [] 00:16:56.321 }, 00:16:56.321 { 00:16:56.321 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:56.321 "subtype": "NVMe", 00:16:56.321 "listen_addresses": [ 00:16:56.321 { 00:16:56.321 "trtype": "VFIOUSER", 00:16:56.321 "adrfam": "IPv4", 00:16:56.321 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:56.321 "trsvcid": "0" 00:16:56.321 } 00:16:56.321 ], 00:16:56.321 "allow_any_host": true, 00:16:56.321 "hosts": [], 00:16:56.321 "serial_number": "SPDK1", 00:16:56.321 "model_number": "SPDK bdev Controller", 00:16:56.321 "max_namespaces": 32, 00:16:56.321 "min_cntlid": 1, 00:16:56.321 "max_cntlid": 65519, 00:16:56.321 "namespaces": [ 00:16:56.321 { 00:16:56.321 "nsid": 1, 00:16:56.321 "bdev_name": "Malloc1", 00:16:56.321 "name": "Malloc1", 00:16:56.321 "nguid": "78CB31EEFCCA45B4BB99319B27E00669", 00:16:56.321 "uuid": "78cb31ee-fcca-45b4-bb99-319b27e00669" 00:16:56.321 }, 00:16:56.321 { 00:16:56.321 "nsid": 2, 00:16:56.321 "bdev_name": "Malloc3", 00:16:56.321 "name": "Malloc3", 00:16:56.321 "nguid": "18B48E6FE0594406A54D3B44FF02D0F0", 00:16:56.321 "uuid": "18b48e6f-e059-4406-a54d-3b44ff02d0f0" 00:16:56.321 } 00:16:56.321 ] 00:16:56.321 }, 00:16:56.321 { 00:16:56.321 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:56.321 "subtype": "NVMe", 00:16:56.321 "listen_addresses": [ 00:16:56.321 { 00:16:56.321 "trtype": "VFIOUSER", 00:16:56.321 "adrfam": "IPv4", 00:16:56.321 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:56.321 "trsvcid": "0" 00:16:56.321 } 00:16:56.321 ], 00:16:56.321 "allow_any_host": true, 00:16:56.321 "hosts": [], 00:16:56.321 "serial_number": "SPDK2", 00:16:56.321 "model_number": "SPDK bdev Controller", 00:16:56.321 "max_namespaces": 32, 00:16:56.321 "min_cntlid": 1, 00:16:56.321 "max_cntlid": 65519, 00:16:56.321 "namespaces": [ 00:16:56.321 { 00:16:56.321 "nsid": 1, 00:16:56.321 "bdev_name": "Malloc2", 00:16:56.321 "name": "Malloc2", 00:16:56.321 "nguid": "DBC4D3580A6D47689C042D183F66E361", 00:16:56.321 "uuid": "dbc4d358-0a6d-4768-9c04-2d183f66e361" 00:16:56.321 } 00:16:56.321 ] 00:16:56.321 } 00:16:56.321 ] 00:16:56.321 14:19:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:56.321 14:19:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=470461 00:16:56.321 14:19:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:56.321 14:19:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:56.321 14:19:19 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@1264 -- # local i=0 00:16:56.321 14:19:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:56.321 14:19:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:56.321 14:19:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:16:56.321 14:19:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:56.322 14:19:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:56.322 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.582 Malloc4 00:16:56.582 [2024-06-07 14:19:20.008587] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:56.582 14:19:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:56.582 [2024-06-07 14:19:20.145534] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:56.582 14:19:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:56.582 Asynchronous Event Request test 00:16:56.582 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:56.582 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:56.582 Registering asynchronous event callbacks... 00:16:56.582 Starting namespace attribute notice tests for all controllers... 00:16:56.582 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:56.582 aer_cb - Changed Namespace 00:16:56.582 Cleaning up... 
00:16:56.843 [ 00:16:56.843 { 00:16:56.843 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:56.843 "subtype": "Discovery", 00:16:56.843 "listen_addresses": [], 00:16:56.843 "allow_any_host": true, 00:16:56.843 "hosts": [] 00:16:56.843 }, 00:16:56.843 { 00:16:56.843 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:56.843 "subtype": "NVMe", 00:16:56.843 "listen_addresses": [ 00:16:56.843 { 00:16:56.843 "trtype": "VFIOUSER", 00:16:56.843 "adrfam": "IPv4", 00:16:56.843 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:56.843 "trsvcid": "0" 00:16:56.843 } 00:16:56.843 ], 00:16:56.843 "allow_any_host": true, 00:16:56.843 "hosts": [], 00:16:56.843 "serial_number": "SPDK1", 00:16:56.843 "model_number": "SPDK bdev Controller", 00:16:56.843 "max_namespaces": 32, 00:16:56.843 "min_cntlid": 1, 00:16:56.843 "max_cntlid": 65519, 00:16:56.843 "namespaces": [ 00:16:56.843 { 00:16:56.843 "nsid": 1, 00:16:56.843 "bdev_name": "Malloc1", 00:16:56.843 "name": "Malloc1", 00:16:56.843 "nguid": "78CB31EEFCCA45B4BB99319B27E00669", 00:16:56.843 "uuid": "78cb31ee-fcca-45b4-bb99-319b27e00669" 00:16:56.843 }, 00:16:56.843 { 00:16:56.843 "nsid": 2, 00:16:56.843 "bdev_name": "Malloc3", 00:16:56.843 "name": "Malloc3", 00:16:56.843 "nguid": "18B48E6FE0594406A54D3B44FF02D0F0", 00:16:56.843 "uuid": "18b48e6f-e059-4406-a54d-3b44ff02d0f0" 00:16:56.843 } 00:16:56.843 ] 00:16:56.843 }, 00:16:56.843 { 00:16:56.843 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:56.843 "subtype": "NVMe", 00:16:56.843 "listen_addresses": [ 00:16:56.843 { 00:16:56.843 "trtype": "VFIOUSER", 00:16:56.843 "adrfam": "IPv4", 00:16:56.843 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:56.843 "trsvcid": "0" 00:16:56.843 } 00:16:56.843 ], 00:16:56.843 "allow_any_host": true, 00:16:56.843 "hosts": [], 00:16:56.843 "serial_number": "SPDK2", 00:16:56.843 "model_number": "SPDK bdev Controller", 00:16:56.843 "max_namespaces": 32, 00:16:56.843 "min_cntlid": 1, 00:16:56.843 "max_cntlid": 65519, 00:16:56.843 "namespaces": [ 00:16:56.843 { 00:16:56.843 "nsid": 1, 00:16:56.843 "bdev_name": "Malloc2", 00:16:56.843 "name": "Malloc2", 00:16:56.843 "nguid": "DBC4D3580A6D47689C042D183F66E361", 00:16:56.843 "uuid": "dbc4d358-0a6d-4768-9c04-2d183f66e361" 00:16:56.843 }, 00:16:56.843 { 00:16:56.843 "nsid": 2, 00:16:56.843 "bdev_name": "Malloc4", 00:16:56.843 "name": "Malloc4", 00:16:56.843 "nguid": "662C15B4B4674877BCDECE8F20F0CCDC", 00:16:56.843 "uuid": "662c15b4-b467-4877-bcde-ce8f20f0ccdc" 00:16:56.843 } 00:16:56.843 ] 00:16:56.843 } 00:16:56.843 ] 00:16:56.843 14:19:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 470461 00:16:56.843 14:19:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:56.843 14:19:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 461386 00:16:56.843 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 461386 ']' 00:16:56.843 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 461386 00:16:56.843 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:16:56.843 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:56.843 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 461386 00:16:56.843 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:56.843 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 
00:16:56.843 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 461386' 00:16:56.843 killing process with pid 461386 00:16:56.843 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 461386 00:16:56.843 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 461386 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=470650 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 470650' 00:16:57.103 Process pid: 470650 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 470650 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 470650 ']' 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:57.103 14:19:20 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:57.103 [2024-06-07 14:19:20.620160] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:57.103 [2024-06-07 14:19:20.621138] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:16:57.103 [2024-06-07 14:19:20.621180] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.103 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.104 [2024-06-07 14:19:20.687749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.104 [2024-06-07 14:19:20.720071] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.104 [2024-06-07 14:19:20.720113] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:57.104 [2024-06-07 14:19:20.720120] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.104 [2024-06-07 14:19:20.720126] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.104 [2024-06-07 14:19:20.720132] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.104 [2024-06-07 14:19:20.720234] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.104 [2024-06-07 14:19:20.720466] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.104 [2024-06-07 14:19:20.720466] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.104 [2024-06-07 14:19:20.720306] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.364 [2024-06-07 14:19:20.780842] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:57.364 [2024-06-07 14:19:20.780962] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:57.364 [2024-06-07 14:19:20.781924] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:57.364 [2024-06-07 14:19:20.782156] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:57.364 [2024-06-07 14:19:20.782302] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:57.970 14:19:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:57.970 14:19:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:16:57.970 14:19:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:58.910 14:19:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:58.910 14:19:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:58.910 14:19:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:58.911 14:19:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:58.911 14:19:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:59.170 14:19:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:59.170 Malloc1 00:16:59.170 14:19:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:59.431 14:19:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:59.431 14:19:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:59.690 14:19:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:16:59.690 14:19:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:59.690 14:19:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:59.950 Malloc2 00:16:59.950 14:19:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:59.950 14:19:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:00.212 14:19:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:00.472 14:19:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:00.473 14:19:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 470650 00:17:00.473 14:19:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 470650 ']' 00:17:00.473 14:19:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 470650 00:17:00.473 14:19:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:17:00.473 14:19:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:00.473 14:19:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 470650 00:17:00.473 14:19:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:00.473 14:19:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:00.473 14:19:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 470650' 00:17:00.473 killing process with pid 470650 00:17:00.473 14:19:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 470650 00:17:00.473 14:19:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 470650 00:17:00.473 14:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:00.473 14:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:00.473 00:17:00.473 real 0m50.655s 00:17:00.473 user 3m21.076s 00:17:00.473 sys 0m2.987s 00:17:00.473 14:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:00.473 14:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:00.473 ************************************ 00:17:00.473 END TEST nvmf_vfio_user 00:17:00.473 ************************************ 00:17:00.733 14:19:24 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:00.733 14:19:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:00.733 14:19:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:00.733 14:19:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:00.733 ************************************ 00:17:00.733 START TEST nvmf_vfio_user_nvme_compliance 00:17:00.733 ************************************ 
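Stripped of the xtrace noise, the per-device provisioning the nvmf_vfio_user stage performs above reduces to a short RPC sequence; the sketch below shows device 1 only, with paths and NQNs taken from this log (the interrupt-mode rerun additionally passes -M -I to nvmf_create_transport), so treat it as a summary of what was captured rather than a canonical recipe:
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1               # socket directory; doubles as the traddr clients pass in -r
$RPC bdev_malloc_create 64 512 -b Malloc1                     # 64 MB malloc bdev with 512-byte blocks
$RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0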
00:17:00.733 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:00.733 * Looking for test storage... 00:17:00.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:00.733 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.733 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:00.733 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.733 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.733 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.733 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=471538 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 471538' 00:17:00.734 Process pid: 471538 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 471538 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # '[' -z 471538 ']' 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:00.734 14:19:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:00.734 [2024-06-07 14:19:24.375175] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:17:00.734 [2024-06-07 14:19:24.375257] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.994 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.994 [2024-06-07 14:19:24.446871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:00.994 [2024-06-07 14:19:24.486636] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.994 [2024-06-07 14:19:24.486676] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.994 [2024-06-07 14:19:24.486684] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.994 [2024-06-07 14:19:24.486690] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.994 [2024-06-07 14:19:24.486696] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:00.994 [2024-06-07 14:19:24.486835] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.994 [2024-06-07 14:19:24.486961] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.994 [2024-06-07 14:19:24.486963] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.564 14:19:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:01.564 14:19:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@863 -- # return 0 00:17:01.564 14:19:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:02.948 malloc0 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:02.948 14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:02.948 
14:19:26 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:02.948 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.948 00:17:02.948 00:17:02.948 CUnit - A unit testing framework for C - Version 2.1-3 00:17:02.948 http://cunit.sourceforge.net/ 00:17:02.948 00:17:02.948 00:17:02.948 Suite: nvme_compliance 00:17:02.948 Test: admin_identify_ctrlr_verify_dptr ...[2024-06-07 14:19:26.402661] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.948 [2024-06-07 14:19:26.403956] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:02.948 [2024-06-07 14:19:26.403966] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:02.948 [2024-06-07 14:19:26.403970] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:02.948 [2024-06-07 14:19:26.405676] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.948 passed 00:17:02.948 Test: admin_identify_ctrlr_verify_fused ...[2024-06-07 14:19:26.501232] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.948 [2024-06-07 14:19:26.504247] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.948 passed 00:17:03.210 Test: admin_identify_ns ...[2024-06-07 14:19:26.599406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.210 [2024-06-07 14:19:26.659203] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:03.210 [2024-06-07 14:19:26.667207] ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:03.210 [2024-06-07 14:19:26.688313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.210 passed 00:17:03.210 Test: admin_get_features_mandatory_features ...[2024-06-07 14:19:26.780316] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.210 [2024-06-07 14:19:26.783337] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.210 passed 00:17:03.470 Test: admin_get_features_optional_features ...[2024-06-07 14:19:26.878879] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.470 [2024-06-07 14:19:26.881888] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.470 passed 00:17:03.470 Test: admin_set_features_number_of_queues ...[2024-06-07 14:19:26.976063] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.470 [2024-06-07 14:19:27.079310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.731 passed 00:17:03.731 Test: admin_get_log_page_mandatory_logs ...[2024-06-07 14:19:27.173001] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.731 [2024-06-07 14:19:27.176015] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.731 passed 00:17:03.731 Test: admin_get_log_page_with_lpo ...[2024-06-07 14:19:27.268443] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.731 [2024-06-07 14:19:27.336209] 
ctrlr.c:2656:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:03.731 [2024-06-07 14:19:27.349273] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.990 passed 00:17:03.990 Test: fabric_property_get ...[2024-06-07 14:19:27.442340] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.990 [2024-06-07 14:19:27.443567] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:03.990 [2024-06-07 14:19:27.446365] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.990 passed 00:17:03.990 Test: admin_delete_io_sq_use_admin_qid ...[2024-06-07 14:19:27.538890] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:03.990 [2024-06-07 14:19:27.540142] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:03.990 [2024-06-07 14:19:27.541917] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.990 passed 00:17:03.990 Test: admin_delete_io_sq_delete_sq_twice ...[2024-06-07 14:19:27.636464] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:04.250 [2024-06-07 14:19:27.719207] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:04.250 [2024-06-07 14:19:27.735211] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:04.250 [2024-06-07 14:19:27.740297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:04.250 passed 00:17:04.250 Test: admin_delete_io_cq_use_admin_qid ...[2024-06-07 14:19:27.832294] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:04.250 [2024-06-07 14:19:27.833514] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:04.250 [2024-06-07 14:19:27.835316] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:04.250 passed 00:17:04.510 Test: admin_delete_io_cq_delete_cq_first ...[2024-06-07 14:19:27.930453] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:04.510 [2024-06-07 14:19:28.006212] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:04.510 [2024-06-07 14:19:28.030203] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:04.510 [2024-06-07 14:19:28.035282] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:04.510 passed 00:17:04.510 Test: admin_create_io_cq_verify_iv_pc ...[2024-06-07 14:19:28.126874] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:04.510 [2024-06-07 14:19:28.128103] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:04.510 [2024-06-07 14:19:28.128121] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:04.510 [2024-06-07 14:19:28.129893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:04.770 passed 00:17:04.770 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-06-07 14:19:28.224015] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:04.770 [2024-06-07 14:19:28.315201] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:17:04.770 [2024-06-07 14:19:28.323202] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:04.770 [2024-06-07 14:19:28.331199] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:04.770 [2024-06-07 14:19:28.339200] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:04.770 [2024-06-07 14:19:28.368290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:04.770 passed 00:17:05.031 Test: admin_create_io_sq_verify_pc ...[2024-06-07 14:19:28.459889] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:05.031 [2024-06-07 14:19:28.476206] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:05.031 [2024-06-07 14:19:28.494049] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:05.031 passed 00:17:05.031 Test: admin_create_io_qp_max_qps ...[2024-06-07 14:19:28.587568] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:06.415 [2024-06-07 14:19:29.699206] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:06.674 [2024-06-07 14:19:30.073813] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:06.674 passed 00:17:06.674 Test: admin_create_io_sq_shared_cq ...[2024-06-07 14:19:30.166023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:06.674 [2024-06-07 14:19:30.297209] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:06.934 [2024-06-07 14:19:30.334285] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:06.935 passed 00:17:06.935 00:17:06.935 Run Summary: Type Total Ran Passed Failed Inactive 00:17:06.935 suites 1 1 n/a 0 0 00:17:06.935 tests 18 18 18 0 0 00:17:06.935 asserts 360 360 360 0 n/a 00:17:06.935 00:17:06.935 Elapsed time = 1.650 seconds 00:17:06.935 14:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 471538 00:17:06.935 14:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@949 -- # '[' -z 471538 ']' 00:17:06.935 14:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # kill -0 471538 00:17:06.935 14:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # uname 00:17:06.935 14:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:06.935 14:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 471538 00:17:06.935 14:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:06.935 14:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:06.935 14:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # echo 'killing process with pid 471538' 00:17:06.935 killing process with pid 471538 00:17:06.935 14:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # kill 471538 00:17:06.935 14:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # wait 471538 00:17:06.935 14:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:06.935 14:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:06.935 00:17:06.935 real 0m6.394s 00:17:06.935 user 0m18.336s 00:17:06.935 sys 0m0.499s 00:17:06.935 14:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:06.935 14:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:06.935 ************************************ 00:17:06.935 END TEST nvmf_vfio_user_nvme_compliance 00:17:06.935 ************************************ 00:17:07.196 14:19:30 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:07.196 14:19:30 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:07.196 14:19:30 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:07.196 14:19:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:07.196 ************************************ 00:17:07.196 START TEST nvmf_vfio_user_fuzz 00:17:07.196 ************************************ 00:17:07.196 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:07.196 * Looking for test storage... 00:17:07.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.196 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.196 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:07.196 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.196 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.196 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.196 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.196 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.196 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.196 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.197 14:19:30 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=472727 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 472727' 00:17:07.197 Process pid: 472727 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 472727 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # '[' -z 472727 ']' 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
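[editor's note] For readability: the next stretch of the trace stands up the VFIO-user fuzz target over RPC and then runs the fuzzer against it. A minimal consolidated sketch of that sequence, reconstructed from the commands echoed below (the rpc.py path is an assumption; the script drives these calls through the rpc_cmd helper against the nvmf_tgt started above with -i 0 -e 0xFFFF -m 0x1):

  # Assumed wrapper paths; the trace uses rpc_cmd / $rootdir helpers instead.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  fuzz=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz

  $rpc nvmf_create_transport -t VFIOUSER                          # enable the vfio-user transport
  mkdir -p /var/run/vfio-user                                     # socket directory for the emulated controller
  $rpc bdev_malloc_create 64 512 -b malloc0                       # 64 MiB RAM bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

  # Fuzz admin + I/O queues for 30 s with a fixed seed, exactly as invoked in the trace.
  $fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a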
00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:07.197 14:19:30 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:08.139 14:19:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:08.139 14:19:31 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@863 -- # return 0 00:17:08.139 14:19:31 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:09.080 malloc0 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:09.080 14:19:32 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:41.211 Fuzzing completed. 
Shutting down the fuzz application 00:17:41.211 00:17:41.211 Dumping successful admin opcodes: 00:17:41.211 8, 9, 10, 24, 00:17:41.211 Dumping successful io opcodes: 00:17:41.211 0, 00:17:41.211 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1105248, total successful commands: 4351, random_seed: 3978656320 00:17:41.211 NS: 0x200003a1ef00 admin qp, Total commands completed: 139030, total successful commands: 1127, random_seed: 1549382208 00:17:41.211 14:20:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:41.211 14:20:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.211 14:20:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:41.211 14:20:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.211 14:20:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 472727 00:17:41.211 14:20:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@949 -- # '[' -z 472727 ']' 00:17:41.211 14:20:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # kill -0 472727 00:17:41.211 14:20:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # uname 00:17:41.211 14:20:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:41.211 14:20:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 472727 00:17:41.211 14:20:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:41.211 14:20:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:41.211 14:20:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 472727' 00:17:41.211 killing process with pid 472727 00:17:41.211 14:20:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # kill 472727 00:17:41.211 14:20:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # wait 472727 00:17:41.211 14:20:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:41.211 14:20:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:41.211 00:17:41.211 real 0m33.610s 00:17:41.211 user 0m37.855s 00:17:41.211 sys 0m25.277s 00:17:41.211 14:20:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:41.211 14:20:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:41.211 ************************************ 00:17:41.211 END TEST nvmf_vfio_user_fuzz 00:17:41.211 ************************************ 00:17:41.211 14:20:04 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:41.211 14:20:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:41.211 14:20:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:41.211 14:20:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:41.211 ************************************ 00:17:41.211 START TEST nvmf_host_management 00:17:41.211 ************************************ 
00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:41.211 * Looking for test storage... 00:17:41.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.211 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:17:41.212 14:20:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.345 14:20:12 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:49.345 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:49.345 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:49.345 Found net devices under 0000:31:00.0: cvl_0_0 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:49.345 Found net devices under 0000:31:00.1: cvl_0_1 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:49.345 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.346 14:20:12 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:49.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:17:49.346 00:17:49.346 --- 10.0.0.2 ping statistics --- 00:17:49.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.346 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:49.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:17:49.346 00:17:49.346 --- 10.0.0.1 ping statistics --- 00:17:49.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.346 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=483593 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 483593 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 483593 ']' 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:49.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:49.346 14:20:12 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:49.346 [2024-06-07 14:20:12.760466] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:17:49.346 [2024-06-07 14:20:12.760515] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.346 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.346 [2024-06-07 14:20:12.851276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:49.346 [2024-06-07 14:20:12.889347] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.346 [2024-06-07 14:20:12.889396] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.346 [2024-06-07 14:20:12.889405] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.346 [2024-06-07 14:20:12.889412] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.346 [2024-06-07 14:20:12.889418] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.346 [2024-06-07 14:20:12.889543] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.346 [2024-06-07 14:20:12.889705] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.346 [2024-06-07 14:20:12.889865] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.346 [2024-06-07 14:20:12.889866] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:17:49.919 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:49.919 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:17:49.919 14:20:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:49.919 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:49.919 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:50.180 [2024-06-07 14:20:13.577849] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- 
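[editor's note] Consolidated from the nvmf_tcp_init steps echoed above, the physical-NIC test fabric and target launch amount to the following; interface names (cvl_0_0 / cvl_0_1), addresses and flags are the ones printed in the trace:

  # Target NIC moves into its own namespace; the initiator NIC stays in the default one.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP to the target port

  # Connectivity check in both directions, as in the ping output above.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # The target is then launched inside the namespace (cores 1-4, all tracepoint groups).
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E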
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:50.180 Malloc0 00:17:50.180 [2024-06-07 14:20:13.641216] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=483664 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 483664 /var/tmp/bdevperf.sock 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 483664 ']' 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:50.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:50.180 { 00:17:50.180 "params": { 00:17:50.180 "name": "Nvme$subsystem", 00:17:50.180 "trtype": "$TEST_TRANSPORT", 00:17:50.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:50.180 "adrfam": "ipv4", 00:17:50.180 "trsvcid": "$NVMF_PORT", 00:17:50.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:50.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:50.180 "hdgst": ${hdgst:-false}, 00:17:50.180 "ddgst": ${ddgst:-false} 00:17:50.180 }, 00:17:50.180 "method": "bdev_nvme_attach_controller" 00:17:50.180 } 00:17:50.180 EOF 00:17:50.180 )") 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:50.180 14:20:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:50.180 "params": { 00:17:50.180 "name": "Nvme0", 00:17:50.180 "trtype": "tcp", 00:17:50.180 "traddr": "10.0.0.2", 00:17:50.180 "adrfam": "ipv4", 00:17:50.180 "trsvcid": "4420", 00:17:50.180 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:50.180 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:50.180 "hdgst": false, 00:17:50.180 "ddgst": false 00:17:50.181 }, 00:17:50.181 "method": "bdev_nvme_attach_controller" 00:17:50.181 }' 00:17:50.181 [2024-06-07 14:20:13.741399] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:17:50.181 [2024-06-07 14:20:13.741452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483664 ] 00:17:50.181 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.181 [2024-06-07 14:20:13.806874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.440 [2024-06-07 14:20:13.838785] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.440 Running I/O for 10 seconds... 
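[editor's note] The bdevperf side of this run, condensed from the gen_nvmf_target_json fragment and command line printed above. This sketch writes the JSON to a file instead of the /dev/fd/63 process substitution the script uses; the outer "subsystems"/"bdev" wrapper is the standard SPDK JSON-config shape assumed to be produced by gen_nvmf_target_json (the wrapper itself is not echoed in the trace), so treat it as a reconstruction:

  cat > /tmp/bdevperf_nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF

  # 64-deep queue of 64 KiB verify I/O for 10 seconds against the namespace exported on 10.0.0.2:4420.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10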
00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.014 14:20:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:51.014 [2024-06-07 14:20:14.584226] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584299] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584308] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be 
set 00:17:51.014 [2024-06-07 14:20:14.584314] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584321] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584327] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584333] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584340] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584346] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584352] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584358] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584364] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584370] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584376] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584388] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584394] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584400] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584406] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584413] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584419] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584426] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584432] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584439] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584445] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584451] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584458] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584465] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584471] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584477] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584483] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584489] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584495] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584502] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584509] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584515] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584521] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584528] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584534] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584540] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584547] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584553] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584563] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584570] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584576] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584583] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584589] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584595] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584601] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584607] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584613] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584620] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.584626] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2027060 is same with the state(5) to be set 00:17:51.014 [2024-06-07 14:20:14.586456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.014 [2024-06-07 14:20:14.586495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.014 [2024-06-07 14:20:14.586511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.014 [2024-06-07 14:20:14.586519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.014 [2024-06-07 14:20:14.586528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.586992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.586998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.587008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.587015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.587025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.587033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.587042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.587049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.587058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.587066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.587076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.015 [2024-06-07 14:20:14.587083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.015 [2024-06-07 14:20:14.587093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:51.016 [2024-06-07 14:20:14.587460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.016 [2024-06-07 14:20:14.587559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.587615] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x556210 was disconnected and freed. reset controller. 
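Editor's note: the burst of NOTICE lines above is the expected fallout of the qpair teardown. Every READ and WRITE still outstanding on submission queue 1 (READ cids up through 63, then WRITE cids 0 through 11) is completed with the generic status ABORTED - SQ DELETION (sct 00, sc 08) before bdev_nvme frees qpair 0x556210 and schedules the controller reset. When triaging a saved copy of this log, a rough count of how much in-flight I/O was flushed can be pulled with standard tools; the build.log filename is only a placeholder for illustration:

    # count the aborted completions, then break them down by queue id
    grep -c 'ABORTED - SQ DELETION' build.log
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c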
00:17:51.016 [2024-06-07 14:20:14.588829] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:51.016 14:20:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.016 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:51.016 14:20:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.016 14:20:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:51.016 task offset: 108032 on job bdev=Nvme0n1 fails 00:17:51.016 00:17:51.016 Latency(us) 00:17:51.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.016 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:51.016 Job: Nvme0n1 ended in about 0.57 seconds with error 00:17:51.016 Verification LBA range: start 0x0 length 0x400 00:17:51.016 Nvme0n1 : 0.57 1475.39 92.21 111.88 0.00 39341.96 1515.52 34515.63 00:17:51.016 =================================================================================================================== 00:17:51.016 Total : 1475.39 92.21 111.88 0.00 39341.96 1515.52 34515.63 00:17:51.016 [2024-06-07 14:20:14.590836] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:51.016 [2024-06-07 14:20:14.590860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x55bee0 (9): Bad file descriptor 00:17:51.016 [2024-06-07 14:20:14.593966] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:17:51.016 [2024-06-07 14:20:14.594087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:51.016 [2024-06-07 14:20:14.594116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.016 [2024-06-07 14:20:14.594133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:17:51.016 [2024-06-07 14:20:14.594140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:17:51.016 [2024-06-07 14:20:14.594148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:17:51.016 [2024-06-07 14:20:14.594155] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x55bee0 00:17:51.016 [2024-06-07 14:20:14.594175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x55bee0 (9): Bad file descriptor 00:17:51.016 [2024-06-07 14:20:14.594188] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:51.017 [2024-06-07 14:20:14.594199] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:51.017 [2024-06-07 14:20:14.594208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:51.017 [2024-06-07 14:20:14.594221] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
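Editor's note: the reconnect attempt above fails by design. At this point in the test the subsystem's allow list does not contain the initiator's NQN, so the fabrics CONNECT is rejected with sct 1 / sc 132 (0x84) and the target logs that 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'; the controller is left in a failed state. host_management.sh line 85 then re-admits the host with nvmf_subsystem_add_host so a later reconnect can succeed. As a standalone sketch of the same allow-list round trip (only the add_host call is visible in this trace; the remove_host step and the relative rpc.py path are assumptions for illustration):

    # toggle host-based access control on the test subsystem
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # new CONNECTs from host0 are rejected
    scripts/rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # host0 is admitted again and can reconnect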
00:17:51.017 14:20:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.017 14:20:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:17:51.959 14:20:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 483664 00:17:51.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (483664) - No such process 00:17:51.959 14:20:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:17:52.219 14:20:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:52.219 14:20:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:52.219 14:20:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:52.219 14:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:52.219 14:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:52.219 14:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:52.219 14:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:52.219 { 00:17:52.219 "params": { 00:17:52.219 "name": "Nvme$subsystem", 00:17:52.219 "trtype": "$TEST_TRANSPORT", 00:17:52.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:52.219 "adrfam": "ipv4", 00:17:52.219 "trsvcid": "$NVMF_PORT", 00:17:52.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:52.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:52.219 "hdgst": ${hdgst:-false}, 00:17:52.219 "ddgst": ${ddgst:-false} 00:17:52.219 }, 00:17:52.219 "method": "bdev_nvme_attach_controller" 00:17:52.219 } 00:17:52.219 EOF 00:17:52.219 )") 00:17:52.219 14:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:52.219 14:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:52.219 14:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:52.219 14:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:52.219 "params": { 00:17:52.219 "name": "Nvme0", 00:17:52.219 "trtype": "tcp", 00:17:52.219 "traddr": "10.0.0.2", 00:17:52.220 "adrfam": "ipv4", 00:17:52.220 "trsvcid": "4420", 00:17:52.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:52.220 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:52.220 "hdgst": false, 00:17:52.220 "ddgst": false 00:17:52.220 }, 00:17:52.220 "method": "bdev_nvme_attach_controller" 00:17:52.220 }' 00:17:52.220 [2024-06-07 14:20:15.657545] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:17:52.220 [2024-06-07 14:20:15.657605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484081 ] 00:17:52.220 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.220 [2024-06-07 14:20:15.721244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.220 [2024-06-07 14:20:15.752002] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.480 Running I/O for 1 seconds... 
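Editor's note: the gen_nvmf_target_json fragment traced above builds one bdev_nvme_attach_controller entry per subsystem index (here just 0) and feeds the assembled document to bdevperf on /dev/fd/62. Only the inner entry is visible in the trace; written out as a self-contained file it plausibly looks like the sketch below, where the outer "subsystems"/"bdev" wrapper and the /tmp path are assumptions based on how this helper normally lays out its config:

    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # same bdevperf flags as the traced run: queue depth 64, 64 KiB verify I/O for 1 second
    ./build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1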
00:17:53.424 00:17:53.424 Latency(us) 00:17:53.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.424 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:53.424 Verification LBA range: start 0x0 length 0x400 00:17:53.424 Nvme0n1 : 1.03 1677.57 104.85 0.00 0.00 37430.08 3822.93 31675.73 00:17:53.424 =================================================================================================================== 00:17:53.424 Total : 1677.57 104.85 0.00 0.00 37430.08 3822.93 31675.73 00:17:53.424 14:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:53.424 14:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:53.424 14:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:53.424 14:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:53.424 14:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:53.424 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.424 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:53.424 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.424 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:53.424 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.424 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.424 rmmod nvme_tcp 00:17:53.685 rmmod nvme_fabrics 00:17:53.685 rmmod nvme_keyring 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 483593 ']' 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 483593 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 483593 ']' 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 483593 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 483593 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 483593' 00:17:53.685 killing process with pid 483593 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 483593 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 483593 00:17:53.685 [2024-06-07 14:20:17.264793] app.c: 
711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.685 14:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.232 14:20:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:56.232 14:20:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:56.232 00:17:56.232 real 0m15.032s 00:17:56.232 user 0m22.212s 00:17:56.232 sys 0m7.079s 00:17:56.232 14:20:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:56.232 14:20:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:56.232 ************************************ 00:17:56.232 END TEST nvmf_host_management 00:17:56.232 ************************************ 00:17:56.232 14:20:19 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:56.232 14:20:19 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:56.232 14:20:19 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:56.232 14:20:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:56.232 ************************************ 00:17:56.232 START TEST nvmf_lvol 00:17:56.232 ************************************ 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:56.232 * Looking for test storage... 
00:17:56.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.232 14:20:19 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:56.232 14:20:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:04.378 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:04.378 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:04.378 Found net devices under 0000:31:00.0: cvl_0_0 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.378 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:04.379 Found net devices under 0000:31:00.1: cvl_0_1 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:04.379 
14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:04.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:04.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:18:04.379 00:18:04.379 --- 10.0.0.2 ping statistics --- 00:18:04.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.379 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:04.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:04.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:18:04.379 00:18:04.379 --- 10.0.0.1 ping statistics --- 00:18:04.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.379 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=489028 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 489028 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 489028 ']' 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:04.379 14:20:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:04.379 [2024-06-07 14:20:27.830448] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:18:04.379 [2024-06-07 14:20:27.830510] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.379 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.379 [2024-06-07 14:20:27.910972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:04.379 [2024-06-07 14:20:27.949904] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.379 [2024-06-07 14:20:27.949949] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:04.379 [2024-06-07 14:20:27.949957] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.379 [2024-06-07 14:20:27.949964] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.379 [2024-06-07 14:20:27.949970] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.379 [2024-06-07 14:20:27.950122] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.379 [2024-06-07 14:20:27.950261] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.379 [2024-06-07 14:20:27.950265] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.323 14:20:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:05.323 14:20:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:18:05.323 14:20:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.323 14:20:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:05.323 14:20:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:05.323 14:20:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.323 14:20:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:05.323 [2024-06-07 14:20:28.786244] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.323 14:20:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:05.584 14:20:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:05.584 14:20:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:05.584 14:20:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:05.584 14:20:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:05.846 14:20:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:06.108 14:20:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b0c8415c-8f30-4aba-89a5-ab2a5ade932e 00:18:06.108 14:20:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b0c8415c-8f30-4aba-89a5-ab2a5ade932e lvol 20 00:18:06.108 14:20:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=61c553d3-46d1-40aa-9d31-78d28d15d5b4 00:18:06.108 14:20:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:06.369 14:20:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 61c553d3-46d1-40aa-9d31-78d28d15d5b4 00:18:06.369 14:20:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:18:06.665 [2024-06-07 14:20:30.143203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.665 14:20:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:06.926 14:20:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=489708 00:18:06.926 14:20:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:06.926 14:20:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:06.926 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.868 14:20:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 61c553d3-46d1-40aa-9d31-78d28d15d5b4 MY_SNAPSHOT 00:18:08.130 14:20:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7ca5f2c3-b132-423a-82e9-d62d3d45a6ef 00:18:08.130 14:20:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 61c553d3-46d1-40aa-9d31-78d28d15d5b4 30 00:18:08.130 14:20:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7ca5f2c3-b132-423a-82e9-d62d3d45a6ef MY_CLONE 00:18:08.390 14:20:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=dd51638b-3dfb-4d5c-9b93-373229839665 00:18:08.390 14:20:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate dd51638b-3dfb-4d5c-9b93-373229839665 00:18:08.961 14:20:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 489708 00:18:17.099 Initializing NVMe Controllers 00:18:17.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:17.099 Controller IO queue size 128, less than required. 00:18:17.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:17.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:17.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:17.099 Initialization complete. Launching workers. 
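Editor's note: with spdk_nvme_perf driving 4 KiB random writes at queue depth 128 on lcores 3 and 4 against the exported namespace, the script reshapes the logical volume underneath the live workload. Condensing the rpc.py calls traced in this section, with shell variables standing in as placeholders for the UUIDs printed above (the lvol 61c553d3-..., the snapshot 7ca5f2c3-..., the clone dd51638b-...):

    rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT   # snapshot the origin volume while I/O continues
    rpc.py bdev_lvol_resize   "$lvol" 30            # grow the origin from LVOL_BDEV_INIT_SIZE (20) to LVOL_BDEV_FINAL_SIZE (30)
    rpc.py bdev_lvol_clone    "$snapshot" MY_CLONE  # create a thin clone backed by the snapshot
    rpc.py bdev_lvol_inflate  "$clone"              # allocate the clone's own clusters, detaching it from the snapshot

The 'Controller IO queue size 128, less than required' notice below is perf pointing out that the target's I/O queue is no larger than the requested queue depth, so submissions may queue inside the initiator driver; it is informational, not a failure.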
00:18:17.099 ======================================================== 00:18:17.099 Latency(us) 00:18:17.099 Device Information : IOPS MiB/s Average min max 00:18:17.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12564.50 49.08 10191.20 1475.07 54425.41 00:18:17.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17878.70 69.84 7160.33 465.71 42725.11 00:18:17.099 ======================================================== 00:18:17.099 Total : 30443.20 118.92 8411.23 465.71 54425.41 00:18:17.099 00:18:17.099 14:20:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:17.384 14:20:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 61c553d3-46d1-40aa-9d31-78d28d15d5b4 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b0c8415c-8f30-4aba-89a5-ab2a5ade932e 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.647 rmmod nvme_tcp 00:18:17.647 rmmod nvme_fabrics 00:18:17.647 rmmod nvme_keyring 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 489028 ']' 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 489028 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 489028 ']' 00:18:17.647 14:20:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 489028 00:18:17.908 14:20:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:18:17.908 14:20:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:17.908 14:20:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 489028 00:18:17.908 14:20:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:18:17.908 14:20:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:18:17.908 14:20:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 489028' 00:18:17.908 killing process with pid 489028 00:18:17.908 14:20:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 489028 00:18:17.908 14:20:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 489028 00:18:17.908 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:17.908 14:20:41 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:17.908 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:17.908 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.908 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.908 14:20:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.908 14:20:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.908 14:20:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:20.458 00:18:20.458 real 0m24.125s 00:18:20.458 user 1m4.034s 00:18:20.458 sys 0m8.377s 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:18:20.458 ************************************ 00:18:20.458 END TEST nvmf_lvol 00:18:20.458 ************************************ 00:18:20.458 14:20:43 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:20.458 14:20:43 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:20.458 14:20:43 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:20.458 14:20:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:20.458 ************************************ 00:18:20.458 START TEST nvmf_lvs_grow 00:18:20.458 ************************************ 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:20.458 * Looking for test storage... 
00:18:20.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.458 14:20:43 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:18:20.459 14:20:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:28.594 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:28.594 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:28.594 Found net devices under 0000:31:00.0: cvl_0_0 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.594 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:28.595 Found net devices under 0000:31:00.1: cvl_0_1 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:28.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.740 ms 00:18:28.595 00:18:28.595 --- 10.0.0.2 ping statistics --- 00:18:28.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.595 rtt min/avg/max/mdev = 0.740/0.740/0.740/0.000 ms 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:28.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.519 ms 00:18:28.595 00:18:28.595 --- 10.0.0.1 ping statistics --- 00:18:28.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.595 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=496402 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 496402 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 496402 ']' 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:28.595 [2024-06-07 14:20:51.773703] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:18:28.595 [2024-06-07 14:20:51.773741] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.595 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.595 [2024-06-07 14:20:51.834944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.595 [2024-06-07 14:20:51.865518] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.595 [2024-06-07 14:20:51.865553] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:28.595 [2024-06-07 14:20:51.865560] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.595 [2024-06-07 14:20:51.865566] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.595 [2024-06-07 14:20:51.865572] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.595 [2024-06-07 14:20:51.865592] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.595 14:20:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:28.595 [2024-06-07 14:20:52.116202] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.595 14:20:52 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:18:28.595 14:20:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:18:28.595 14:20:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:28.595 14:20:52 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:28.595 ************************************ 00:18:28.595 START TEST lvs_grow_clean 00:18:28.595 ************************************ 00:18:28.595 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:18:28.595 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:28.595 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:28.595 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:28.595 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:28.595 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:28.595 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:28.595 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:28.595 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:28.595 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:28.856 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:18:28.856 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:29.116 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c3566c79-258c-4ce1-a924-33baa6af91e5 00:18:29.116 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3566c79-258c-4ce1-a924-33baa6af91e5 00:18:29.116 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:29.116 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:29.116 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:29.116 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c3566c79-258c-4ce1-a924-33baa6af91e5 lvol 150 00:18:29.376 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9c3680f0-3233-4ade-843d-463994b0d697 00:18:29.376 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:29.376 14:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:29.376 [2024-06-07 14:20:52.996224] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:29.376 [2024-06-07 14:20:52.996273] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:29.376 true 00:18:29.376 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3566c79-258c-4ce1-a924-33baa6af91e5 00:18:29.376 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:29.636 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:29.636 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:29.895 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9c3680f0-3233-4ade-843d-463994b0d697 00:18:29.895 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:30.156 [2024-06-07 14:20:53.594010] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.156 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:30.156 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=496788 00:18:30.156 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:30.156 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:30.156 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 496788 /var/tmp/bdevperf.sock 00:18:30.156 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 496788 ']' 00:18:30.156 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.156 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:30.156 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.156 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:30.156 14:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:18:30.416 [2024-06-07 14:20:53.818555] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:18:30.416 [2024-06-07 14:20:53.818622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid496788 ] 00:18:30.416 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.416 [2024-06-07 14:20:53.902144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.416 [2024-06-07 14:20:53.933745] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.986 14:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:30.986 14:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:18:30.986 14:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:31.246 Nvme0n1 00:18:31.246 14:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:31.507 [ 00:18:31.507 { 00:18:31.507 "name": "Nvme0n1", 00:18:31.507 "aliases": [ 00:18:31.507 "9c3680f0-3233-4ade-843d-463994b0d697" 00:18:31.507 ], 00:18:31.507 "product_name": "NVMe disk", 00:18:31.507 "block_size": 4096, 00:18:31.507 "num_blocks": 38912, 00:18:31.507 "uuid": "9c3680f0-3233-4ade-843d-463994b0d697", 00:18:31.507 "assigned_rate_limits": { 00:18:31.507 "rw_ios_per_sec": 0, 00:18:31.507 "rw_mbytes_per_sec": 0, 00:18:31.507 "r_mbytes_per_sec": 0, 00:18:31.507 "w_mbytes_per_sec": 0 00:18:31.507 }, 00:18:31.507 "claimed": false, 00:18:31.507 "zoned": false, 00:18:31.507 "supported_io_types": { 00:18:31.507 "read": true, 00:18:31.507 "write": true, 00:18:31.507 "unmap": true, 00:18:31.507 "write_zeroes": true, 00:18:31.507 "flush": true, 00:18:31.507 "reset": true, 00:18:31.507 "compare": true, 00:18:31.507 "compare_and_write": true, 00:18:31.507 "abort": true, 00:18:31.507 "nvme_admin": true, 00:18:31.507 "nvme_io": true 00:18:31.507 }, 00:18:31.507 "memory_domains": [ 00:18:31.507 { 00:18:31.507 "dma_device_id": "system", 00:18:31.507 "dma_device_type": 1 00:18:31.507 } 00:18:31.507 ], 00:18:31.507 "driver_specific": { 00:18:31.507 "nvme": [ 00:18:31.507 { 00:18:31.507 "trid": { 00:18:31.507 "trtype": "TCP", 00:18:31.507 "adrfam": "IPv4", 00:18:31.507 "traddr": "10.0.0.2", 00:18:31.507 "trsvcid": "4420", 00:18:31.507 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:31.507 }, 00:18:31.507 "ctrlr_data": { 00:18:31.507 "cntlid": 1, 00:18:31.507 "vendor_id": "0x8086", 00:18:31.507 "model_number": "SPDK bdev Controller", 00:18:31.507 "serial_number": "SPDK0", 00:18:31.507 "firmware_revision": "24.09", 00:18:31.507 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:31.507 "oacs": { 00:18:31.507 "security": 0, 00:18:31.507 "format": 0, 00:18:31.507 "firmware": 0, 00:18:31.507 "ns_manage": 0 00:18:31.507 }, 00:18:31.507 "multi_ctrlr": true, 00:18:31.507 "ana_reporting": false 00:18:31.507 }, 00:18:31.507 "vs": { 00:18:31.507 "nvme_version": "1.3" 00:18:31.507 }, 00:18:31.507 "ns_data": { 00:18:31.507 "id": 1, 00:18:31.507 "can_share": true 00:18:31.507 } 00:18:31.507 } 00:18:31.507 ], 00:18:31.507 "mp_policy": "active_passive" 00:18:31.507 } 00:18:31.507 } 00:18:31.507 ] 00:18:31.507 14:20:54 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:31.507 14:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=497117 00:18:31.507 14:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:31.507 Running I/O for 10 seconds... 00:18:32.486 Latency(us) 00:18:32.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:32.486 Nvme0n1 : 1.00 18045.00 70.49 0.00 0.00 0.00 0.00 0.00 00:18:32.486 =================================================================================================================== 00:18:32.486 Total : 18045.00 70.49 0.00 0.00 0.00 0.00 0.00 00:18:32.486 00:18:33.427 14:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c3566c79-258c-4ce1-a924-33baa6af91e5 00:18:33.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.427 Nvme0n1 : 2.00 18232.00 71.22 0.00 0.00 0.00 0.00 0.00 00:18:33.427 =================================================================================================================== 00:18:33.427 Total : 18232.00 71.22 0.00 0.00 0.00 0.00 0.00 00:18:33.427 00:18:33.686 true 00:18:33.686 14:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3566c79-258c-4ce1-a924-33baa6af91e5 00:18:33.686 14:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:33.686 14:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:33.686 14:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:33.686 14:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 497117 00:18:34.623 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.623 Nvme0n1 : 3.00 18275.00 71.39 0.00 0.00 0.00 0.00 0.00 00:18:34.623 =================================================================================================================== 00:18:34.623 Total : 18275.00 71.39 0.00 0.00 0.00 0.00 0.00 00:18:34.623 00:18:35.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:35.563 Nvme0n1 : 4.00 18307.25 71.51 0.00 0.00 0.00 0.00 0.00 00:18:35.563 =================================================================================================================== 00:18:35.563 Total : 18307.25 71.51 0.00 0.00 0.00 0.00 0.00 00:18:35.563 00:18:36.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:36.504 Nvme0n1 : 5.00 18348.40 71.67 0.00 0.00 0.00 0.00 0.00 00:18:36.504 =================================================================================================================== 00:18:36.504 Total : 18348.40 71.67 0.00 0.00 0.00 0.00 0.00 00:18:36.504 00:18:37.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:37.444 Nvme0n1 : 6.00 18365.17 71.74 0.00 0.00 0.00 0.00 0.00 00:18:37.444 
=================================================================================================================== 00:18:37.444 Total : 18365.17 71.74 0.00 0.00 0.00 0.00 0.00 00:18:37.444 00:18:38.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:38.829 Nvme0n1 : 7.00 18383.29 71.81 0.00 0.00 0.00 0.00 0.00 00:18:38.829 =================================================================================================================== 00:18:38.829 Total : 18383.29 71.81 0.00 0.00 0.00 0.00 0.00 00:18:38.829 00:18:39.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:39.769 Nvme0n1 : 8.00 18398.25 71.87 0.00 0.00 0.00 0.00 0.00 00:18:39.769 =================================================================================================================== 00:18:39.769 Total : 18398.25 71.87 0.00 0.00 0.00 0.00 0.00 00:18:39.769 00:18:40.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:40.710 Nvme0n1 : 9.00 18415.00 71.93 0.00 0.00 0.00 0.00 0.00 00:18:40.710 =================================================================================================================== 00:18:40.710 Total : 18415.00 71.93 0.00 0.00 0.00 0.00 0.00 00:18:40.710 00:18:41.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:41.651 Nvme0n1 : 10.00 18424.20 71.97 0.00 0.00 0.00 0.00 0.00 00:18:41.651 =================================================================================================================== 00:18:41.651 Total : 18424.20 71.97 0.00 0.00 0.00 0.00 0.00 00:18:41.651 00:18:41.651 00:18:41.651 Latency(us) 00:18:41.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:41.651 Nvme0n1 : 10.01 18426.96 71.98 0.00 0.00 6942.20 4205.23 14417.92 00:18:41.651 =================================================================================================================== 00:18:41.651 Total : 18426.96 71.98 0.00 0.00 6942.20 4205.23 14417.92 00:18:41.651 0 00:18:41.651 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 496788 00:18:41.651 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 496788 ']' 00:18:41.651 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 496788 00:18:41.651 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:18:41.651 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:41.651 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 496788 00:18:41.651 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:41.651 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:41.651 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 496788' 00:18:41.651 killing process with pid 496788 00:18:41.651 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 496788 00:18:41.651 Received shutdown signal, test time was about 10.000000 seconds 00:18:41.651 00:18:41.651 Latency(us) 00:18:41.651 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:18:41.651 =================================================================================================================== 00:18:41.651 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:41.651 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 496788 00:18:41.651 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:41.912 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:42.172 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3566c79-258c-4ce1-a924-33baa6af91e5 00:18:42.172 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:42.172 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:42.172 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:18:42.172 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:42.432 [2024-06-07 14:21:05.934383] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:42.432 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3566c79-258c-4ce1-a924-33baa6af91e5 00:18:42.432 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:18:42.432 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3566c79-258c-4ce1-a924-33baa6af91e5 00:18:42.432 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.432 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:42.432 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.432 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:42.432 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.432 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:42.432 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.432 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:42.432 14:21:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3566c79-258c-4ce1-a924-33baa6af91e5 00:18:42.693 request: 00:18:42.693 { 00:18:42.693 "uuid": "c3566c79-258c-4ce1-a924-33baa6af91e5", 00:18:42.693 "method": "bdev_lvol_get_lvstores", 00:18:42.693 "req_id": 1 00:18:42.693 } 00:18:42.693 Got JSON-RPC error response 00:18:42.693 response: 00:18:42.693 { 00:18:42.693 "code": -19, 00:18:42.693 "message": "No such device" 00:18:42.693 } 00:18:42.693 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:18:42.693 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:42.693 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:42.693 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:42.693 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:42.693 aio_bdev 00:18:42.693 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9c3680f0-3233-4ade-843d-463994b0d697 00:18:42.693 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=9c3680f0-3233-4ade-843d-463994b0d697 00:18:42.693 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:18:42.693 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:18:42.693 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:18:42.693 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:18:42.693 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:42.953 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9c3680f0-3233-4ade-843d-463994b0d697 -t 2000 00:18:42.953 [ 00:18:42.953 { 00:18:42.953 "name": "9c3680f0-3233-4ade-843d-463994b0d697", 00:18:42.953 "aliases": [ 00:18:42.953 "lvs/lvol" 00:18:42.953 ], 00:18:42.953 "product_name": "Logical Volume", 00:18:42.953 "block_size": 4096, 00:18:42.953 "num_blocks": 38912, 00:18:42.953 "uuid": "9c3680f0-3233-4ade-843d-463994b0d697", 00:18:42.953 "assigned_rate_limits": { 00:18:42.953 "rw_ios_per_sec": 0, 00:18:42.953 "rw_mbytes_per_sec": 0, 00:18:42.953 "r_mbytes_per_sec": 0, 00:18:42.953 "w_mbytes_per_sec": 0 00:18:42.953 }, 00:18:42.953 "claimed": false, 00:18:42.953 "zoned": false, 00:18:42.953 "supported_io_types": { 00:18:42.953 "read": true, 00:18:42.953 "write": true, 00:18:42.953 "unmap": true, 00:18:42.953 "write_zeroes": true, 00:18:42.953 "flush": false, 00:18:42.953 "reset": true, 00:18:42.953 "compare": false, 00:18:42.953 "compare_and_write": false, 00:18:42.954 "abort": false, 00:18:42.954 "nvme_admin": false, 00:18:42.954 "nvme_io": false 00:18:42.954 }, 00:18:42.954 "driver_specific": { 00:18:42.954 "lvol": { 00:18:42.954 "lvol_store_uuid": "c3566c79-258c-4ce1-a924-33baa6af91e5", 00:18:42.954 "base_bdev": "aio_bdev", 
00:18:42.954 "thin_provision": false, 00:18:42.954 "num_allocated_clusters": 38, 00:18:42.954 "snapshot": false, 00:18:42.954 "clone": false, 00:18:42.954 "esnap_clone": false 00:18:42.954 } 00:18:42.954 } 00:18:42.954 } 00:18:42.954 ] 00:18:42.954 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:18:42.954 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3566c79-258c-4ce1-a924-33baa6af91e5 00:18:42.954 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:43.214 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:43.214 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3566c79-258c-4ce1-a924-33baa6af91e5 00:18:43.214 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:43.474 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:43.474 14:21:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9c3680f0-3233-4ade-843d-463994b0d697 00:18:43.474 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c3566c79-258c-4ce1-a924-33baa6af91e5 00:18:43.733 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:43.733 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:43.733 00:18:43.733 real 0m15.197s 00:18:43.733 user 0m14.940s 00:18:43.733 sys 0m1.217s 00:18:43.733 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:43.733 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:18:43.733 ************************************ 00:18:43.733 END TEST lvs_grow_clean 00:18:43.733 ************************************ 00:18:43.993 14:21:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:43.993 14:21:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:43.993 14:21:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:43.993 14:21:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:43.993 ************************************ 00:18:43.993 START TEST lvs_grow_dirty 00:18:43.993 ************************************ 00:18:43.993 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:18:43.993 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:43.993 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:43.993 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:18:43.993 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:43.993 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:43.993 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:43.993 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:43.993 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:43.993 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:44.252 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:44.252 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:44.252 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=aef8a8e2-b130-4b25-a985-700bcfefcee9 00:18:44.252 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef8a8e2-b130-4b25-a985-700bcfefcee9 00:18:44.252 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:44.513 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:44.513 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:44.513 14:21:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aef8a8e2-b130-4b25-a985-700bcfefcee9 lvol 150 00:18:44.513 14:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=822e89d4-e00a-450d-aa51-6649c237e7fb 00:18:44.513 14:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:44.513 14:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:44.774 [2024-06-07 14:21:08.245724] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:44.774 [2024-06-07 14:21:08.245776] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:44.774 true 00:18:44.774 14:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef8a8e2-b130-4b25-a985-700bcfefcee9 00:18:44.774 14:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:18:44.774 14:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:44.774 14:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:45.034 14:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 822e89d4-e00a-450d-aa51-6649c237e7fb 00:18:45.294 14:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:45.295 [2024-06-07 14:21:08.839522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.295 14:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:45.556 14:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=500159 00:18:45.556 14:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:45.556 14:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:45.556 14:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 500159 /var/tmp/bdevperf.sock 00:18:45.556 14:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 500159 ']' 00:18:45.556 14:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.556 14:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:45.556 14:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.556 14:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:45.556 14:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:45.556 [2024-06-07 14:21:09.054761] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:18:45.556 [2024-06-07 14:21:09.054814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid500159 ] 00:18:45.556 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.556 [2024-06-07 14:21:09.134456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.556 [2024-06-07 14:21:09.162854] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.500 14:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:46.500 14:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:18:46.500 14:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:46.761 Nvme0n1 00:18:46.761 14:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:46.761 [ 00:18:46.761 { 00:18:46.761 "name": "Nvme0n1", 00:18:46.761 "aliases": [ 00:18:46.761 "822e89d4-e00a-450d-aa51-6649c237e7fb" 00:18:46.761 ], 00:18:46.761 "product_name": "NVMe disk", 00:18:46.761 "block_size": 4096, 00:18:46.761 "num_blocks": 38912, 00:18:46.761 "uuid": "822e89d4-e00a-450d-aa51-6649c237e7fb", 00:18:46.761 "assigned_rate_limits": { 00:18:46.761 "rw_ios_per_sec": 0, 00:18:46.761 "rw_mbytes_per_sec": 0, 00:18:46.761 "r_mbytes_per_sec": 0, 00:18:46.761 "w_mbytes_per_sec": 0 00:18:46.761 }, 00:18:46.761 "claimed": false, 00:18:46.761 "zoned": false, 00:18:46.761 "supported_io_types": { 00:18:46.761 "read": true, 00:18:46.761 "write": true, 00:18:46.761 "unmap": true, 00:18:46.761 "write_zeroes": true, 00:18:46.761 "flush": true, 00:18:46.761 "reset": true, 00:18:46.761 "compare": true, 00:18:46.761 "compare_and_write": true, 00:18:46.761 "abort": true, 00:18:46.761 "nvme_admin": true, 00:18:46.761 "nvme_io": true 00:18:46.761 }, 00:18:46.761 "memory_domains": [ 00:18:46.761 { 00:18:46.761 "dma_device_id": "system", 00:18:46.761 "dma_device_type": 1 00:18:46.761 } 00:18:46.761 ], 00:18:46.761 "driver_specific": { 00:18:46.761 "nvme": [ 00:18:46.762 { 00:18:46.762 "trid": { 00:18:46.762 "trtype": "TCP", 00:18:46.762 "adrfam": "IPv4", 00:18:46.762 "traddr": "10.0.0.2", 00:18:46.762 "trsvcid": "4420", 00:18:46.762 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:46.762 }, 00:18:46.762 "ctrlr_data": { 00:18:46.762 "cntlid": 1, 00:18:46.762 "vendor_id": "0x8086", 00:18:46.762 "model_number": "SPDK bdev Controller", 00:18:46.762 "serial_number": "SPDK0", 00:18:46.762 "firmware_revision": "24.09", 00:18:46.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:46.762 "oacs": { 00:18:46.762 "security": 0, 00:18:46.762 "format": 0, 00:18:46.762 "firmware": 0, 00:18:46.762 "ns_manage": 0 00:18:46.762 }, 00:18:46.762 "multi_ctrlr": true, 00:18:46.762 "ana_reporting": false 00:18:46.762 }, 00:18:46.762 "vs": { 00:18:46.762 "nvme_version": "1.3" 00:18:46.762 }, 00:18:46.762 "ns_data": { 00:18:46.762 "id": 1, 00:18:46.762 "can_share": true 00:18:46.762 } 00:18:46.762 } 00:18:46.762 ], 00:18:46.762 "mp_policy": "active_passive" 00:18:46.762 } 00:18:46.762 } 00:18:46.762 ] 00:18:46.762 14:21:10 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=500692 00:18:46.762 14:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:46.762 14:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:47.023 Running I/O for 10 seconds... 00:18:47.965 Latency(us) 00:18:47.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:47.965 Nvme0n1 : 1.00 18201.00 71.10 0.00 0.00 0.00 0.00 0.00 00:18:47.965 =================================================================================================================== 00:18:47.965 Total : 18201.00 71.10 0.00 0.00 0.00 0.00 0.00 00:18:47.965 00:18:48.940 14:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aef8a8e2-b130-4b25-a985-700bcfefcee9 00:18:48.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:48.940 Nvme0n1 : 2.00 18294.00 71.46 0.00 0.00 0.00 0.00 0.00 00:18:48.940 =================================================================================================================== 00:18:48.940 Total : 18294.00 71.46 0.00 0.00 0.00 0.00 0.00 00:18:48.940 00:18:48.940 true 00:18:48.940 14:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef8a8e2-b130-4b25-a985-700bcfefcee9 00:18:48.940 14:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:49.200 14:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:49.200 14:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:49.200 14:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 500692 00:18:50.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:50.143 Nvme0n1 : 3.00 18317.33 71.55 0.00 0.00 0.00 0.00 0.00 00:18:50.143 =================================================================================================================== 00:18:50.143 Total : 18317.33 71.55 0.00 0.00 0.00 0.00 0.00 00:18:50.143 00:18:51.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:51.082 Nvme0n1 : 4.00 18359.00 71.71 0.00 0.00 0.00 0.00 0.00 00:18:51.082 =================================================================================================================== 00:18:51.082 Total : 18359.00 71.71 0.00 0.00 0.00 0.00 0.00 00:18:51.082 00:18:52.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.023 Nvme0n1 : 5.00 18379.00 71.79 0.00 0.00 0.00 0.00 0.00 00:18:52.023 =================================================================================================================== 00:18:52.023 Total : 18379.00 71.79 0.00 0.00 0.00 0.00 0.00 00:18:52.023 00:18:52.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.966 Nvme0n1 : 6.00 18393.83 71.85 0.00 0.00 0.00 0.00 0.00 00:18:52.966 
=================================================================================================================== 00:18:52.966 Total : 18393.83 71.85 0.00 0.00 0.00 0.00 0.00 00:18:52.966 00:18:53.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:53.909 Nvme0n1 : 7.00 18401.43 71.88 0.00 0.00 0.00 0.00 0.00 00:18:53.909 =================================================================================================================== 00:18:53.909 Total : 18401.43 71.88 0.00 0.00 0.00 0.00 0.00 00:18:53.909 00:18:54.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:54.853 Nvme0n1 : 8.00 18414.62 71.93 0.00 0.00 0.00 0.00 0.00 00:18:54.853 =================================================================================================================== 00:18:54.853 Total : 18414.62 71.93 0.00 0.00 0.00 0.00 0.00 00:18:54.853 00:18:55.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:55.795 Nvme0n1 : 9.00 18426.22 71.98 0.00 0.00 0.00 0.00 0.00 00:18:55.795 =================================================================================================================== 00:18:55.795 Total : 18426.22 71.98 0.00 0.00 0.00 0.00 0.00 00:18:55.795 00:18:57.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:57.183 Nvme0n1 : 10.00 18434.00 72.01 0.00 0.00 0.00 0.00 0.00 00:18:57.183 =================================================================================================================== 00:18:57.183 Total : 18434.00 72.01 0.00 0.00 0.00 0.00 0.00 00:18:57.183 00:18:57.183 00:18:57.183 Latency(us) 00:18:57.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:57.183 Nvme0n1 : 10.01 18435.53 72.01 0.00 0.00 6940.29 3713.71 12779.52 00:18:57.183 =================================================================================================================== 00:18:57.183 Total : 18435.53 72.01 0.00 0.00 6940.29 3713.71 12779.52 00:18:57.183 0 00:18:57.183 14:21:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 500159 00:18:57.183 14:21:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 500159 ']' 00:18:57.183 14:21:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 500159 00:18:57.183 14:21:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:18:57.183 14:21:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:18:57.183 14:21:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 500159 00:18:57.183 14:21:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:18:57.183 14:21:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:18:57.183 14:21:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 500159' 00:18:57.183 killing process with pid 500159 00:18:57.183 14:21:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 500159 00:18:57.183 Received shutdown signal, test time was about 10.000000 seconds 00:18:57.183 00:18:57.183 Latency(us) 00:18:57.183 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:18:57.183 =================================================================================================================== 00:18:57.183 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.183 14:21:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 500159 00:18:57.183 14:21:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:57.183 14:21:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:57.444 14:21:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef8a8e2-b130-4b25-a985-700bcfefcee9 00:18:57.444 14:21:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:57.704 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:57.704 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:57.704 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 496402 00:18:57.704 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 496402 00:18:57.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 496402 Killed "${NVMF_APP[@]}" "$@" 00:18:57.705 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:57.705 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:57.705 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:57.705 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:18:57.705 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:57.705 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=502770 00:18:57.705 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 502770 00:18:57.705 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:57.705 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 502770 ']' 00:18:57.705 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.705 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:57.705 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
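[Editor's note] The "dirty" part of the test is what happens around this restart: the lvstore was grown to 99 clusters while bdevperf was still writing, the target was then killed with SIGKILL so the grow never saw a clean shutdown, and the freshly started target has to recover the blobstore when the backing bdev is re-created. In outline (illustrative only; uuid, pid and file path shortened):
scripts/rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>       # grow the lvstore while I/O is in flight
kill -9 <nvmf_tgt-pid>                                    # leave the lvstore metadata dirty on purpose
# start a fresh nvmf_tgt, then re-create the AIO bdev on the same file:
scripts/rpc.py bdev_aio_create <aio-file> aio_bdev 4096   # logs "Performing recovery on blobstore"
scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # expected: 99 after recovery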
00:18:57.705 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:57.705 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:57.705 [2024-06-07 14:21:21.201541] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:18:57.705 [2024-06-07 14:21:21.201594] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.705 EAL: No free 2048 kB hugepages reported on node 1 00:18:57.705 [2024-06-07 14:21:21.274543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.705 [2024-06-07 14:21:21.306250] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.705 [2024-06-07 14:21:21.306284] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.705 [2024-06-07 14:21:21.306292] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.705 [2024-06-07 14:21:21.306298] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.705 [2024-06-07 14:21:21.306304] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:57.705 [2024-06-07 14:21:21.306322] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.646 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:58.646 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:18:58.646 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:58.646 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:18:58.646 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:58.646 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.646 14:21:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:58.646 [2024-06-07 14:21:22.136548] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:58.646 [2024-06-07 14:21:22.136640] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:58.646 [2024-06-07 14:21:22.136671] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:58.646 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:58.646 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 822e89d4-e00a-450d-aa51-6649c237e7fb 00:18:58.646 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=822e89d4-e00a-450d-aa51-6649c237e7fb 00:18:58.646 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:18:58.646 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:18:58.646 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@901 -- # [[ -z '' ]] 00:18:58.646 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:18:58.646 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:58.906 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 822e89d4-e00a-450d-aa51-6649c237e7fb -t 2000 00:18:58.906 [ 00:18:58.906 { 00:18:58.906 "name": "822e89d4-e00a-450d-aa51-6649c237e7fb", 00:18:58.906 "aliases": [ 00:18:58.906 "lvs/lvol" 00:18:58.906 ], 00:18:58.906 "product_name": "Logical Volume", 00:18:58.906 "block_size": 4096, 00:18:58.906 "num_blocks": 38912, 00:18:58.906 "uuid": "822e89d4-e00a-450d-aa51-6649c237e7fb", 00:18:58.906 "assigned_rate_limits": { 00:18:58.906 "rw_ios_per_sec": 0, 00:18:58.906 "rw_mbytes_per_sec": 0, 00:18:58.906 "r_mbytes_per_sec": 0, 00:18:58.907 "w_mbytes_per_sec": 0 00:18:58.907 }, 00:18:58.907 "claimed": false, 00:18:58.907 "zoned": false, 00:18:58.907 "supported_io_types": { 00:18:58.907 "read": true, 00:18:58.907 "write": true, 00:18:58.907 "unmap": true, 00:18:58.907 "write_zeroes": true, 00:18:58.907 "flush": false, 00:18:58.907 "reset": true, 00:18:58.907 "compare": false, 00:18:58.907 "compare_and_write": false, 00:18:58.907 "abort": false, 00:18:58.907 "nvme_admin": false, 00:18:58.907 "nvme_io": false 00:18:58.907 }, 00:18:58.907 "driver_specific": { 00:18:58.907 "lvol": { 00:18:58.907 "lvol_store_uuid": "aef8a8e2-b130-4b25-a985-700bcfefcee9", 00:18:58.907 "base_bdev": "aio_bdev", 00:18:58.907 "thin_provision": false, 00:18:58.907 "num_allocated_clusters": 38, 00:18:58.907 "snapshot": false, 00:18:58.907 "clone": false, 00:18:58.907 "esnap_clone": false 00:18:58.907 } 00:18:58.907 } 00:18:58.907 } 00:18:58.907 ] 00:18:58.907 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:18:58.907 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef8a8e2-b130-4b25-a985-700bcfefcee9 00:18:58.907 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:59.167 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:59.167 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef8a8e2-b130-4b25-a985-700bcfefcee9 00:18:59.167 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:59.167 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:59.167 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:59.427 [2024-06-07 14:21:22.904495] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:59.427 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
aef8a8e2-b130-4b25-a985-700bcfefcee9 00:18:59.427 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:18:59.427 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef8a8e2-b130-4b25-a985-700bcfefcee9 00:18:59.427 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:59.427 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:59.427 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:59.427 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:59.427 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:59.427 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:59.427 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:59.427 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:59.427 14:21:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef8a8e2-b130-4b25-a985-700bcfefcee9 00:18:59.687 request: 00:18:59.687 { 00:18:59.687 "uuid": "aef8a8e2-b130-4b25-a985-700bcfefcee9", 00:18:59.687 "method": "bdev_lvol_get_lvstores", 00:18:59.687 "req_id": 1 00:18:59.687 } 00:18:59.687 Got JSON-RPC error response 00:18:59.687 response: 00:18:59.687 { 00:18:59.687 "code": -19, 00:18:59.687 "message": "No such device" 00:18:59.687 } 00:18:59.687 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:18:59.687 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:59.687 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:59.687 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:59.687 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:59.687 aio_bdev 00:18:59.687 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 822e89d4-e00a-450d-aa51-6649c237e7fb 00:18:59.687 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=822e89d4-e00a-450d-aa51-6649c237e7fb 00:18:59.687 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:18:59.687 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:18:59.687 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 
00:18:59.687 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:18:59.687 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:59.946 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 822e89d4-e00a-450d-aa51-6649c237e7fb -t 2000 00:18:59.946 [ 00:18:59.946 { 00:18:59.946 "name": "822e89d4-e00a-450d-aa51-6649c237e7fb", 00:18:59.946 "aliases": [ 00:18:59.946 "lvs/lvol" 00:18:59.946 ], 00:18:59.946 "product_name": "Logical Volume", 00:18:59.946 "block_size": 4096, 00:18:59.946 "num_blocks": 38912, 00:18:59.946 "uuid": "822e89d4-e00a-450d-aa51-6649c237e7fb", 00:18:59.946 "assigned_rate_limits": { 00:18:59.946 "rw_ios_per_sec": 0, 00:18:59.946 "rw_mbytes_per_sec": 0, 00:18:59.946 "r_mbytes_per_sec": 0, 00:18:59.946 "w_mbytes_per_sec": 0 00:18:59.946 }, 00:18:59.946 "claimed": false, 00:18:59.946 "zoned": false, 00:18:59.946 "supported_io_types": { 00:18:59.946 "read": true, 00:18:59.946 "write": true, 00:18:59.946 "unmap": true, 00:18:59.946 "write_zeroes": true, 00:18:59.946 "flush": false, 00:18:59.946 "reset": true, 00:18:59.946 "compare": false, 00:18:59.946 "compare_and_write": false, 00:18:59.946 "abort": false, 00:18:59.946 "nvme_admin": false, 00:18:59.946 "nvme_io": false 00:18:59.946 }, 00:18:59.946 "driver_specific": { 00:18:59.946 "lvol": { 00:18:59.946 "lvol_store_uuid": "aef8a8e2-b130-4b25-a985-700bcfefcee9", 00:18:59.946 "base_bdev": "aio_bdev", 00:18:59.946 "thin_provision": false, 00:18:59.946 "num_allocated_clusters": 38, 00:18:59.946 "snapshot": false, 00:18:59.946 "clone": false, 00:18:59.946 "esnap_clone": false 00:18:59.946 } 00:18:59.946 } 00:18:59.946 } 00:18:59.946 ] 00:18:59.946 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:18:59.946 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef8a8e2-b130-4b25-a985-700bcfefcee9 00:18:59.946 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:19:00.205 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:19:00.205 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aef8a8e2-b130-4b25-a985-700bcfefcee9 00:19:00.205 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:19:00.205 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:19:00.205 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 822e89d4-e00a-450d-aa51-6649c237e7fb 00:19:00.466 14:21:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aef8a8e2-b130-4b25-a985-700bcfefcee9 00:19:00.726 14:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:00.726 14:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:00.726 00:19:00.726 real 0m16.875s 00:19:00.726 user 0m43.785s 00:19:00.726 sys 0m2.777s 00:19:00.726 14:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:00.726 14:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:19:00.726 ************************************ 00:19:00.726 END TEST lvs_grow_dirty 00:19:00.726 ************************************ 00:19:00.726 14:21:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:19:00.726 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:19:00.726 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:19:00.726 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:19:00.726 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:00.986 nvmf_trace.0 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:00.986 rmmod nvme_tcp 00:19:00.986 rmmod nvme_fabrics 00:19:00.986 rmmod nvme_keyring 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 502770 ']' 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 502770 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 502770 ']' 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 502770 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 502770 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow 
-- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 502770' 00:19:00.986 killing process with pid 502770 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 502770 00:19:00.986 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 502770 00:19:01.246 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:01.246 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:01.246 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:01.246 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:01.246 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:01.246 14:21:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.246 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.246 14:21:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.157 14:21:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:03.157 00:19:03.157 real 0m43.100s 00:19:03.157 user 1m4.753s 00:19:03.157 sys 0m10.223s 00:19:03.157 14:21:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:03.157 14:21:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:19:03.157 ************************************ 00:19:03.157 END TEST nvmf_lvs_grow 00:19:03.157 ************************************ 00:19:03.157 14:21:26 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:03.157 14:21:26 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:03.157 14:21:26 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:03.157 14:21:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:03.418 ************************************ 00:19:03.418 START TEST nvmf_bdev_io_wait 00:19:03.418 ************************************ 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:03.418 * Looking for test storage... 
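[Editor's note] Every suite in this log is launched the same way: the top-level test runner calls run_test with a name and the test script, and run_test prints the START/END TEST banners and the real/user/sys timing summary seen above. A stripped-down sketch of that pattern, not the actual autotest_common.sh implementation:
run_test() {                        # simplified; the real helper also tracks PIDs and xtrace context
    local name=$1; shift
    echo "START TEST $name"
    time "$@"                       # run the suite; real/user/sys are printed on completion
    echo "END TEST $name"
}
run_test nvmf_bdev_io_wait test/nvmf/target/bdev_io_wait.sh --transport=tcp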
00:19:03.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.418 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.419 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:03.419 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:03.419 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:19:03.419 14:21:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:11.605 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:11.605 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:11.605 Found net devices under 0000:31:00.0: cvl_0_0 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:11.605 Found net devices under 0000:31:00.1: cvl_0_1 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.605 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:11.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:11.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:19:11.606 00:19:11.606 --- 10.0.0.2 ping statistics --- 00:19:11.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.606 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:11.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:19:11.606 00:19:11.606 --- 10.0.0.1 ping statistics --- 00:19:11.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.606 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=508176 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 508176 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 508176 ']' 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:11.606 14:21:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:11.606 [2024-06-07 14:21:35.019420] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
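[Editor's note] The target/initiator topology used from here on was set up by nvmf_tcp_init in the trace above: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are ping-verified before the target starts. Condensed from the commands above (interface names are the ones detected on this machine):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator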
00:19:11.606 [2024-06-07 14:21:35.019482] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.606 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.606 [2024-06-07 14:21:35.104300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.606 [2024-06-07 14:21:35.145105] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.606 [2024-06-07 14:21:35.145146] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.606 [2024-06-07 14:21:35.145154] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.606 [2024-06-07 14:21:35.145160] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.606 [2024-06-07 14:21:35.145166] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:11.606 [2024-06-07 14:21:35.145247] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.606 [2024-06-07 14:21:35.145287] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.606 [2024-06-07 14:21:35.145666] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.606 [2024-06-07 14:21:35.145667] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.178 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:12.178 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:19:12.178 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:12.178 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:12.178 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:12.440 [2024-06-07 14:21:35.913491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.440 14:21:35 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:12.440 Malloc0 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:12.440 [2024-06-07 14:21:35.988461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=508473 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=508476 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:12.440 { 00:19:12.440 "params": { 00:19:12.440 "name": "Nvme$subsystem", 00:19:12.440 "trtype": "$TEST_TRANSPORT", 00:19:12.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.440 "adrfam": "ipv4", 00:19:12.440 "trsvcid": "$NVMF_PORT", 00:19:12.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.440 "hdgst": ${hdgst:-false}, 00:19:12.440 "ddgst": ${ddgst:-false} 00:19:12.440 }, 00:19:12.440 "method": "bdev_nvme_attach_controller" 00:19:12.440 } 00:19:12.440 EOF 00:19:12.440 )") 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=508479 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:12.440 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:12.440 { 00:19:12.440 "params": { 00:19:12.440 "name": "Nvme$subsystem", 00:19:12.440 "trtype": "$TEST_TRANSPORT", 00:19:12.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.440 "adrfam": "ipv4", 00:19:12.440 "trsvcid": "$NVMF_PORT", 00:19:12.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.440 "hdgst": ${hdgst:-false}, 00:19:12.440 "ddgst": ${ddgst:-false} 00:19:12.440 }, 00:19:12.440 "method": "bdev_nvme_attach_controller" 00:19:12.440 } 00:19:12.440 EOF 00:19:12.441 )") 00:19:12.441 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=508482 00:19:12.441 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:12.441 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:12.441 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:19:12.441 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:12.441 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:12.441 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:12.441 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:12.441 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:12.441 { 00:19:12.441 "params": { 00:19:12.441 "name": "Nvme$subsystem", 00:19:12.441 "trtype": "$TEST_TRANSPORT", 00:19:12.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.441 "adrfam": "ipv4", 00:19:12.441 "trsvcid": "$NVMF_PORT", 00:19:12.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.441 "hdgst": ${hdgst:-false}, 00:19:12.441 "ddgst": ${ddgst:-false} 00:19:12.441 }, 00:19:12.441 "method": "bdev_nvme_attach_controller" 00:19:12.441 } 00:19:12.441 EOF 00:19:12.441 )") 00:19:12.441 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:12.441 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:19:12.441 14:21:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:19:12.441 { 00:19:12.441 "params": { 00:19:12.441 "name": "Nvme$subsystem", 00:19:12.441 "trtype": "$TEST_TRANSPORT", 00:19:12.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:12.441 "adrfam": "ipv4", 00:19:12.441 "trsvcid": "$NVMF_PORT", 00:19:12.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:12.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:12.441 "hdgst": ${hdgst:-false}, 00:19:12.441 "ddgst": ${ddgst:-false} 00:19:12.441 }, 00:19:12.441 "method": "bdev_nvme_attach_controller" 00:19:12.441 } 00:19:12.441 EOF 00:19:12.441 )") 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 508473 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:12.441 "params": { 00:19:12.441 "name": "Nvme1", 00:19:12.441 "trtype": "tcp", 00:19:12.441 "traddr": "10.0.0.2", 00:19:12.441 "adrfam": "ipv4", 00:19:12.441 "trsvcid": "4420", 00:19:12.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.441 "hdgst": false, 00:19:12.441 "ddgst": false 00:19:12.441 }, 00:19:12.441 "method": "bdev_nvme_attach_controller" 00:19:12.441 }' 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:12.441 "params": { 00:19:12.441 "name": "Nvme1", 00:19:12.441 "trtype": "tcp", 00:19:12.441 "traddr": "10.0.0.2", 00:19:12.441 "adrfam": "ipv4", 00:19:12.441 "trsvcid": "4420", 00:19:12.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.441 "hdgst": false, 00:19:12.441 "ddgst": false 00:19:12.441 }, 00:19:12.441 "method": "bdev_nvme_attach_controller" 00:19:12.441 }' 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:12.441 "params": { 00:19:12.441 "name": "Nvme1", 00:19:12.441 "trtype": "tcp", 00:19:12.441 "traddr": "10.0.0.2", 00:19:12.441 "adrfam": "ipv4", 00:19:12.441 "trsvcid": "4420", 00:19:12.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.441 "hdgst": false, 00:19:12.441 "ddgst": false 00:19:12.441 }, 00:19:12.441 "method": "bdev_nvme_attach_controller" 00:19:12.441 }' 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:19:12.441 14:21:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:12.441 "params": { 00:19:12.441 "name": "Nvme1", 00:19:12.441 "trtype": "tcp", 00:19:12.441 "traddr": "10.0.0.2", 00:19:12.441 "adrfam": "ipv4", 00:19:12.441 "trsvcid": "4420", 00:19:12.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.441 "hdgst": false, 00:19:12.441 "ddgst": false 00:19:12.441 }, 00:19:12.441 "method": "bdev_nvme_attach_controller" 
00:19:12.441 }' 00:19:12.441 [2024-06-07 14:21:36.040165] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:19:12.441 [2024-06-07 14:21:36.040224] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:12.441 [2024-06-07 14:21:36.041342] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:19:12.441 [2024-06-07 14:21:36.041392] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:12.441 [2024-06-07 14:21:36.042058] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:19:12.441 [2024-06-07 14:21:36.042105] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:12.441 [2024-06-07 14:21:36.042866] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:19:12.441 [2024-06-07 14:21:36.042911] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:12.702 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.702 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.702 [2024-06-07 14:21:36.198093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.702 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.702 [2024-06-07 14:21:36.216600] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:19:12.702 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.702 [2024-06-07 14:21:36.256343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.702 [2024-06-07 14:21:36.276376] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:19:12.702 [2024-06-07 14:21:36.301289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.702 [2024-06-07 14:21:36.320310] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:19:12.702 [2024-06-07 14:21:36.332322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.963 [2024-06-07 14:21:36.351508] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:19:12.963 Running I/O for 1 seconds... 00:19:12.963 Running I/O for 1 seconds... 00:19:12.963 Running I/O for 1 seconds... 00:19:12.963 Running I/O for 1 seconds... 
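A condensed sketch of the nvmf_bdev_io_wait flow traced above, for replaying it by hand outside the harness. The RPC sequence and bdevperf flags are copied from the trace; $SPDK_DIR and the /tmp config path are placeholders, the target is assumed to be an nvmf_tgt started with --wait-for-rpc and reachable at 10.0.0.2:4420, and the outer subsystems/bdev wrapper around the bdev_nvme_attach_controller entry is the standard SPDK JSON-config shape rather than text shown verbatim in the log.

    SPDK_DIR=/path/to/spdk                     # placeholder for the built SPDK tree
    RPC="$SPDK_DIR/scripts/rpc.py"             # talks to the target's /var/tmp/spdk.sock

    # Target side: same RPCs the script above issues via rpc_cmd.
    "$RPC" bdev_set_options -p 5 -c 1          # tiny bdev_io pool/cache, so I/O hits the bdev_io_wait path
    "$RPC" framework_start_init                # finish init (target was started with --wait-for-rpc)
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create 64 512 -b Malloc0
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: one of the four bdevperf instances (the write run).
    cat > /tmp/nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    "$SPDK_DIR/build/examples/bdevperf" -m 0x10 -i 1 --json /tmp/nvme1.json \
        -q 128 -o 4096 -w write -t 1 -s 256

The other three bdevperf instances in the trace differ only in workload (-w read, flush, unmap), core mask and instance id, which is why four "Running I/O for 1 seconds" lines and four result tables follow.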
00:19:13.904 00:19:13.904 Latency(us) 00:19:13.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.904 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:13.904 Nvme1n1 : 1.01 8445.30 32.99 0.00 0.00 15071.11 6826.67 25559.04 00:19:13.904 =================================================================================================================== 00:19:13.904 Total : 8445.30 32.99 0.00 0.00 15071.11 6826.67 25559.04 00:19:13.904 00:19:13.904 Latency(us) 00:19:13.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.904 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:19:13.904 Nvme1n1 : 1.01 13139.94 51.33 0.00 0.00 9707.59 6225.92 18896.21 00:19:13.904 =================================================================================================================== 00:19:13.904 Total : 13139.94 51.33 0.00 0.00 9707.59 6225.92 18896.21 00:19:13.904 00:19:13.904 Latency(us) 00:19:13.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.904 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:13.904 Nvme1n1 : 1.00 188526.96 736.43 0.00 0.00 676.44 267.95 791.89 00:19:13.904 =================================================================================================================== 00:19:13.904 Total : 188526.96 736.43 0.00 0.00 676.44 267.95 791.89 00:19:14.165 00:19:14.165 Latency(us) 00:19:14.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.165 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:14.165 Nvme1n1 : 1.00 8381.40 32.74 0.00 0.00 15232.84 4287.15 36918.61 00:19:14.165 =================================================================================================================== 00:19:14.165 Total : 8381.40 32.74 0.00 0.00 15232.84 4287.15 36918.61 00:19:14.165 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 508476 00:19:14.165 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 508479 00:19:14.165 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 508482 00:19:14.165 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.165 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.165 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:14.165 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.165 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:14.165 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:14.165 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:14.165 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:19:14.165 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:14.165 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:19:14.165 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:14.165 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:14.165 rmmod nvme_tcp 00:19:14.165 rmmod nvme_fabrics 00:19:14.426 rmmod nvme_keyring 00:19:14.426 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:14.426 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:19:14.426 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:19:14.426 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 508176 ']' 00:19:14.426 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 508176 00:19:14.426 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 508176 ']' 00:19:14.426 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 508176 00:19:14.426 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:19:14.426 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:14.426 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 508176 00:19:14.426 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:14.426 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:14.426 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 508176' 00:19:14.426 killing process with pid 508176 00:19:14.426 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 508176 00:19:14.426 14:21:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 508176 00:19:14.426 14:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:14.426 14:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:14.426 14:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:14.426 14:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:14.426 14:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:14.426 14:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.426 14:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.426 14:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.973 14:21:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:16.973 00:19:16.973 real 0m13.273s 00:19:16.973 user 0m19.122s 00:19:16.973 sys 0m7.200s 00:19:16.973 14:21:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:16.973 14:21:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:19:16.973 ************************************ 00:19:16.973 END TEST nvmf_bdev_io_wait 00:19:16.973 ************************************ 00:19:16.973 14:21:40 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:16.973 14:21:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:16.973 14:21:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:16.973 14:21:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:16.973 ************************************ 00:19:16.973 START TEST nvmf_queue_depth 00:19:16.973 ************************************ 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:16.973 * Looking for test storage... 00:19:16.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.973 14:21:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:19:16.974 14:21:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:25.117 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:25.117 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:19:25.117 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:25.117 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:25.117 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:25.117 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:25.117 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:25.117 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:19:25.117 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:25.117 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:19:25.117 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:19:25.117 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:19:25.117 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:25.118 
14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:25.118 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:25.118 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:25.118 Found net devices under 0000:31:00.0: cvl_0_0 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:25.118 Found net devices under 0000:31:00.1: cvl_0_1 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:25.118 14:21:47 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:25.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:25.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.719 ms 00:19:25.118 00:19:25.118 --- 10.0.0.2 ping statistics --- 00:19:25.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.118 rtt min/avg/max/mdev = 0.719/0.719/0.719/0.000 ms 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:25.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:25.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:19:25.118 00:19:25.118 --- 10.0.0.1 ping statistics --- 00:19:25.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.118 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=513481 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 513481 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 513481 ']' 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:25.118 14:21:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:25.118 [2024-06-07 14:21:48.324006] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:19:25.118 [2024-06-07 14:21:48.324083] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.118 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.118 [2024-06-07 14:21:48.422265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.118 [2024-06-07 14:21:48.468881] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.118 [2024-06-07 14:21:48.468937] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.119 [2024-06-07 14:21:48.468945] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.119 [2024-06-07 14:21:48.468952] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.119 [2024-06-07 14:21:48.468958] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.119 [2024-06-07 14:21:48.468981] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:25.691 [2024-06-07 14:21:49.170902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:25.691 Malloc0 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.691 14:21:49 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:25.691 [2024-06-07 14:21:49.247546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=513603 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 513603 /var/tmp/bdevperf.sock 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 513603 ']' 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:25.691 14:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:25.691 [2024-06-07 14:21:49.313269] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:19:25.691 [2024-06-07 14:21:49.313330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513603 ] 00:19:25.952 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.952 [2024-06-07 14:21:49.386218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.952 [2024-06-07 14:21:49.425915] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.522 14:21:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:26.522 14:21:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:19:26.522 14:21:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:26.522 14:21:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.522 14:21:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:26.782 NVMe0n1 00:19:26.782 14:21:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.782 14:21:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:26.782 Running I/O for 10 seconds... 00:19:39.005 00:19:39.005 Latency(us) 00:19:39.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.005 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:39.005 Verification LBA range: start 0x0 length 0x4000 00:19:39.005 NVMe0n1 : 10.05 11504.92 44.94 0.00 0.00 88730.34 22937.60 79517.01 00:19:39.005 =================================================================================================================== 00:19:39.005 Total : 11504.92 44.94 0.00 0.00 88730.34 22937.60 79517.01 00:19:39.005 0 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 513603 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 513603 ']' 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 513603 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 513603 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 513603' 00:19:39.005 killing process with pid 513603 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 513603 00:19:39.005 Received shutdown signal, test time was about 10.000000 seconds 00:19:39.005 00:19:39.005 Latency(us) 00:19:39.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.005 =================================================================================================================== 00:19:39.005 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 513603 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:39.005 rmmod nvme_tcp 00:19:39.005 rmmod nvme_fabrics 00:19:39.005 rmmod nvme_keyring 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 513481 ']' 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 513481 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 513481 ']' 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 513481 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 513481 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 513481' 00:19:39.005 killing process with pid 513481 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 513481 00:19:39.005 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 513481 00:19:39.006 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:39.006 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:39.006 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:39.006 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.006 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:39.006 14:22:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.006 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.006 14:22:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.612 14:22:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:39.612 00:19:39.612 real 0m22.773s 00:19:39.612 user 0m25.879s 00:19:39.612 sys 0m7.048s 
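For the nvmf_queue_depth run above, the traced steps reduce to three commands: start bdevperf idle with its own RPC socket, attach the target's subsystem as an NVMe bdev over TCP, then trigger the timed run through bdevperf.py. A minimal sketch under the same assumptions as before ($SPDK_DIR is a placeholder; all flags are taken from the trace; the wait loop is a crude stand-in for the harness's waitforlisten):

    SPDK_DIR=/path/to/spdk                     # placeholder for the built SPDK tree
    SOCK=/var/tmp/bdevperf.sock

    # 1) Start bdevperf idle (-z) with its own RPC socket: qd 1024, 4 KiB I/O,
    #    verify workload, 10 second run.
    "$SPDK_DIR/build/examples/bdevperf" -z -r "$SOCK" -q 1024 -o 4096 -w verify -t 10 &
    while [ ! -S "$SOCK" ]; do sleep 0.2; done

    # 2) Attach the target's subsystem over TCP; it shows up as bdev NVMe0n1.
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # 3) Kick off the timed run; the 10-second summary table above is its output.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

The -q 1024 setting drives the initiator at a queue depth well above the usual per-queue defaults, which appears to be the behavior this test exercises.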
00:19:39.612 14:22:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:39.612 14:22:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:39.612 ************************************ 00:19:39.612 END TEST nvmf_queue_depth 00:19:39.612 ************************************ 00:19:39.612 14:22:02 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:39.612 14:22:02 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:39.612 14:22:02 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:39.612 14:22:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:39.612 ************************************ 00:19:39.612 START TEST nvmf_target_multipath 00:19:39.612 ************************************ 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:39.612 * Looking for test storage... 00:19:39.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.612 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.613 14:22:03 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
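The nvmf/common.sh lines traced above pin down the connection identity that every TCP target test in this run reuses (listener ports, serial number, a per-run host NQN). A minimal standalone sketch of that environment, using only values visible in the trace, would be:

  #!/usr/bin/env bash
  # Sketch of the NVMe-oF test identity set up by nvmf/common.sh; the values
  # below are copied from the trace, only the HOSTID derivation is inferred.
  NVMF_PORT=4420                              # primary TCP listener
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)            # fresh host NQN per run
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # host ID reuses the generated UUID
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  echo "host $NVME_HOSTNQN will connect to $NVME_SUBNQN on port $NVMF_PORT"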
00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:19:39.613 14:22:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:47.755 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.755 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:19:47.755 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:47.755 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:47.755 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:47.755 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:47.755 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:47.755 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:19:47.755 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:47.755 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:19:47.755 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:19:47.755 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:19:47.755 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:47.756 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:47.756 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:47.756 Found net devices under 0000:31:00.0: cvl_0_0 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:47.756 Found net devices under 0000:31:00.1: cvl_0_1 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.756 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:47.757 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:47.757 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.757 14:22:10 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:47.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:19:47.757 00:19:47.757 --- 10.0.0.2 ping statistics --- 00:19:47.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.757 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:47.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:47.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:19:47.757 00:19:47.757 --- 10.0.0.1 ping statistics --- 00:19:47.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.757 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:47.757 only one NIC for nvmf test 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:47.757 rmmod nvme_tcp 00:19:47.757 rmmod nvme_fabrics 00:19:47.757 rmmod nvme_keyring 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.757 14:22:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:50.302 00:19:50.302 real 0m10.361s 00:19:50.302 user 0m2.377s 00:19:50.302 sys 0m5.887s 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:50.302 14:22:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:50.302 ************************************ 00:19:50.302 END TEST nvmf_target_multipath 00:19:50.302 ************************************ 00:19:50.302 14:22:13 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:50.302 14:22:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:19:50.302 14:22:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:50.302 14:22:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:50.302 ************************************ 00:19:50.302 START TEST nvmf_zcopy 00:19:50.302 ************************************ 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:50.302 * Looking for test storage... 
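The multipath run above ends almost immediately: the '[ -z ]' test at multipath.sh@45 sees an empty value, the script prints 'only one NIC for nvmf test', tears the target back down, and exits 0, so the test is skipped rather than failed. Reconstructed from the traced line numbers (the tested variable is not named in the log, so NVMF_SECOND_TARGET_IP is an assumption), the guard amounts to:

  # Reconstruction of the skip guard traced at multipath.sh@45-48; the variable
  # name is assumed, since the trace only shows an empty string being tested.
  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
      echo 'only one NIC for nvmf test'
      nvmftestfini        # unload nvme-tcp/nvme-fabrics and flush the spdk netns, as above
      exit 0              # skipped, but reported as a pass
  fi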
00:19:50.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.302 14:22:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:19:50.303 14:22:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:58.439 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.439 
14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:58.439 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:58.439 Found net devices under 0000:31:00.0: cvl_0_0 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:58.439 Found net devices under 0000:31:00.1: cvl_0_1 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.439 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:58.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:19:58.439 00:19:58.439 --- 10.0.0.2 ping statistics --- 00:19:58.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.440 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:58.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:19:58.440 00:19:58.440 --- 10.0.0.1 ping statistics --- 00:19:58.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.440 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=525281 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 525281 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 525281 ']' 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:58.440 14:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:58.440 [2024-06-07 14:22:21.758112] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:19:58.440 [2024-06-07 14:22:21.758158] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.440 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.440 [2024-06-07 14:22:21.843745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.440 [2024-06-07 14:22:21.873976] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.440 [2024-06-07 14:22:21.874011] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:58.440 [2024-06-07 14:22:21.874019] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.440 [2024-06-07 14:22:21.874025] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.440 [2024-06-07 14:22:21.874030] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.440 [2024-06-07 14:22:21.874048] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:59.011 [2024-06-07 14:22:22.585756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:59.011 [2024-06-07 14:22:22.609968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:59.011 malloc0 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.011 
14:22:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:59.011 14:22:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.272 14:22:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:59.272 14:22:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:59.272 14:22:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:59.272 14:22:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:59.272 14:22:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:59.272 14:22:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:59.272 { 00:19:59.272 "params": { 00:19:59.272 "name": "Nvme$subsystem", 00:19:59.272 "trtype": "$TEST_TRANSPORT", 00:19:59.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:59.272 "adrfam": "ipv4", 00:19:59.272 "trsvcid": "$NVMF_PORT", 00:19:59.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:59.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:59.272 "hdgst": ${hdgst:-false}, 00:19:59.272 "ddgst": ${ddgst:-false} 00:19:59.272 }, 00:19:59.272 "method": "bdev_nvme_attach_controller" 00:19:59.272 } 00:19:59.272 EOF 00:19:59.272 )") 00:19:59.272 14:22:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:59.272 14:22:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:19:59.272 14:22:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:59.272 14:22:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:59.272 "params": { 00:19:59.272 "name": "Nvme1", 00:19:59.272 "trtype": "tcp", 00:19:59.272 "traddr": "10.0.0.2", 00:19:59.272 "adrfam": "ipv4", 00:19:59.272 "trsvcid": "4420", 00:19:59.272 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.272 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.272 "hdgst": false, 00:19:59.272 "ddgst": false 00:19:59.272 }, 00:19:59.272 "method": "bdev_nvme_attach_controller" 00:19:59.272 }' 00:19:59.272 [2024-06-07 14:22:22.708046] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:19:59.272 [2024-06-07 14:22:22.708109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525339 ] 00:19:59.272 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.272 [2024-06-07 14:22:22.777979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.272 [2024-06-07 14:22:22.817480] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.534 Running I/O for 10 seconds... 
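At this point everything the zcopy target needs has been provisioned over RPC, and the 10-second verify job above is running against it. For reference, the same provisioning can be replayed by hand with rpc.py against a target started in the cvl_0_0_ns_spdk namespace; the commands below are copied from the traced rpc_cmd calls, only the shell variable is added for brevity:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # TCP transport with zero-copy enabled and in-capsule data size 0, as traced
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
  # subsystem capped at 10 namespaces, serial number taken from the trace
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MiB malloc bdev with a 4096-byte block size, exposed as namespace 1
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1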
00:20:09.534 00:20:09.534 Latency(us) 00:20:09.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.534 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:20:09.534 Verification LBA range: start 0x0 length 0x1000 00:20:09.534 Nvme1n1 : 10.01 8012.06 62.59 0.00 0.00 15923.30 2198.19 26214.40 00:20:09.534 =================================================================================================================== 00:20:09.534 Total : 8012.06 62.59 0.00 0.00 15923.30 2198.19 26214.40 00:20:09.534 14:22:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=527337 00:20:09.534 14:22:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:20:09.534 14:22:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:09.534 14:22:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:20:09.534 14:22:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:20:09.534 14:22:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:20:09.534 14:22:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:20:09.534 14:22:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.534 14:22:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.534 { 00:20:09.534 "params": { 00:20:09.534 "name": "Nvme$subsystem", 00:20:09.534 "trtype": "$TEST_TRANSPORT", 00:20:09.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.534 "adrfam": "ipv4", 00:20:09.534 "trsvcid": "$NVMF_PORT", 00:20:09.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.534 "hdgst": ${hdgst:-false}, 00:20:09.534 "ddgst": ${ddgst:-false} 00:20:09.534 }, 00:20:09.534 "method": "bdev_nvme_attach_controller" 00:20:09.534 } 00:20:09.534 EOF 00:20:09.534 )") 00:20:09.534 [2024-06-07 14:22:33.119491] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.534 [2024-06-07 14:22:33.119520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.534 14:22:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:20:09.534 14:22:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
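The first bdevperf pass above completes its 10-second verify run at roughly 8012 IOPS (about 62.6 MiB/s at 8 KiB I/O), and a second pass (-t 5 -q 128 -w randrw -M 50) is then started against the same namespace. While it runs, the target log fills with 'Requested NSID 1 already in use' messages from nvmf_rpc_ns_paused: nvmf_subsystem_add_ns keeps being re-issued for a namespace that malloc0 already occupies, each attempt is rejected, and the run continues regardless. Illustratively (this loop is a reconstruction, not the script's literal code), the effect is:

  # Reconstruction of what produces the repeated add_ns failures above:
  # re-adding NSID 1 while the perf job (pid $perfpid in the trace) is alive,
  # with every expected rejection ignored.
  while kill -0 "$perfpid" 2>/dev/null; do
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done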
00:20:09.534 14:22:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:20:09.534 14:22:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:09.534 "params": { 00:20:09.534 "name": "Nvme1", 00:20:09.534 "trtype": "tcp", 00:20:09.534 "traddr": "10.0.0.2", 00:20:09.534 "adrfam": "ipv4", 00:20:09.534 "trsvcid": "4420", 00:20:09.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.534 "hdgst": false, 00:20:09.534 "ddgst": false 00:20:09.534 }, 00:20:09.534 "method": "bdev_nvme_attach_controller" 00:20:09.534 }' 00:20:09.534 [2024-06-07 14:22:33.131493] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.534 [2024-06-07 14:22:33.131501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.534 [2024-06-07 14:22:33.143524] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.534 [2024-06-07 14:22:33.143532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.534 [2024-06-07 14:22:33.155554] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.534 [2024-06-07 14:22:33.155562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.534 [2024-06-07 14:22:33.158892] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:20:09.534 [2024-06-07 14:22:33.158939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527337 ] 00:20:09.534 [2024-06-07 14:22:33.167585] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.534 [2024-06-07 14:22:33.167593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.534 [2024-06-07 14:22:33.179617] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.534 [2024-06-07 14:22:33.179625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.794 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.794 [2024-06-07 14:22:33.191646] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.794 [2024-06-07 14:22:33.191654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.794 [2024-06-07 14:22:33.203677] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.794 [2024-06-07 14:22:33.203685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.794 [2024-06-07 14:22:33.215708] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.794 [2024-06-07 14:22:33.215716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.794 [2024-06-07 14:22:33.222291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.794 [2024-06-07 14:22:33.227738] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.794 [2024-06-07 14:22:33.227747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:09.794 [2024-06-07 14:22:33.239771] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:09.794 [2024-06-07 14:22:33.239787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:20:09.794 [2024-06-07 14:22:33.253460] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.794 
Running I/O for 5 seconds... 00:20:10.055 
[repetitive test output trimmed: the pair "subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" recurs at roughly 12-13 ms intervals from 2024-06-07 14:22:33.251798 (elapsed 00:20:09.794) through 14:22:37 (elapsed 00:20:13.496)] 
[2024-06-07 14:22:37.006059] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.496 [2024-06-07 14:22:37.006075] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.496 [2024-06-07 14:22:37.019402] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.496 [2024-06-07 14:22:37.019418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.496 [2024-06-07 14:22:37.032695] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.496 [2024-06-07 14:22:37.032710] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.496 [2024-06-07 14:22:37.045612] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.496 [2024-06-07 14:22:37.045627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.496 [2024-06-07 14:22:37.059040] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.496 [2024-06-07 14:22:37.059055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.496 [2024-06-07 14:22:37.071956] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.496 [2024-06-07 14:22:37.071971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.496 [2024-06-07 14:22:37.085132] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.496 [2024-06-07 14:22:37.085148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.496 [2024-06-07 14:22:37.098272] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.496 [2024-06-07 14:22:37.098287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.496 [2024-06-07 14:22:37.111383] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.496 [2024-06-07 14:22:37.111398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.496 [2024-06-07 14:22:37.124233] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.496 [2024-06-07 14:22:37.124249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.496 [2024-06-07 14:22:37.136914] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.496 [2024-06-07 14:22:37.136930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.149717] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.149732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.162693] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.162708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.175623] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.175638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.188667] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.188682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.201014] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.201029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.213733] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.213748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.226945] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.226960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.240255] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.240270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.253089] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.253104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.265697] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.265712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.278617] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.278632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.291034] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.291049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.303916] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.303930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.316709] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.316723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.329804] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.329819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.342592] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.342607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.355739] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.355754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.368346] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.368361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.381229] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.381244] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:13.757 [2024-06-07 14:22:37.394921] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:13.757 [2024-06-07 14:22:37.394935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.408030] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.408046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.421451] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.421470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.434201] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.434217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.446908] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.446923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.460445] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.460461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.472754] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.472769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.485465] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.485479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.498548] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.498563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.511870] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.511886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.525276] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.525292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.538538] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.538553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.551865] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.551880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.564778] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.564793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.578234] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.578249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.591370] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.591385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.604644] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.604659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.617823] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.617838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.631231] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.631246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.018 [2024-06-07 14:22:37.644433] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.018 [2024-06-07 14:22:37.644448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.019 [2024-06-07 14:22:37.656617] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.019 [2024-06-07 14:22:37.656632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.669189] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.669213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.681657] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.681671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.694678] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.694693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.707170] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.707184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.720053] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.720067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.732846] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.732861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.745567] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.745582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.758963] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.758978] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.772437] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.772452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.784798] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.784813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.798059] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.798074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.811010] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.811025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.824130] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.824144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.837033] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.837048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.849774] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.849789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.862526] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.862541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.280 [2024-06-07 14:22:37.874995] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.280 [2024-06-07 14:22:37.875010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.281 [2024-06-07 14:22:37.888066] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.281 [2024-06-07 14:22:37.888081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.281 [2024-06-07 14:22:37.901010] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.281 [2024-06-07 14:22:37.901025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.281 [2024-06-07 14:22:37.914441] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.281 [2024-06-07 14:22:37.914460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.541 [2024-06-07 14:22:37.927599] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.541 [2024-06-07 14:22:37.927615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.541 [2024-06-07 14:22:37.940482] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.541 [2024-06-07 14:22:37.940497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.541 [2024-06-07 14:22:37.953463] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.541 [2024-06-07 14:22:37.953478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.541 [2024-06-07 14:22:37.966755] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.541 [2024-06-07 14:22:37.966770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.541 [2024-06-07 14:22:37.979541] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:37.979556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.542 [2024-06-07 14:22:37.993141] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:37.993156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.542 [2024-06-07 14:22:38.005606] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:38.005621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.542 [2024-06-07 14:22:38.018531] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:38.018546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.542 [2024-06-07 14:22:38.031715] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:38.031730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.542 [2024-06-07 14:22:38.045255] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:38.045270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.542 [2024-06-07 14:22:38.058784] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:38.058799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.542 [2024-06-07 14:22:38.071128] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:38.071143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.542 [2024-06-07 14:22:38.084467] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:38.084482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.542 [2024-06-07 14:22:38.097890] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:38.097905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.542 [2024-06-07 14:22:38.111236] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:38.111251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.542 [2024-06-07 14:22:38.123809] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:38.123824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.542 [2024-06-07 14:22:38.137182] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:38.137202] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.542 [2024-06-07 14:22:38.150400] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:38.150415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.542 [2024-06-07 14:22:38.163617] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:38.163639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.542 [2024-06-07 14:22:38.176315] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.542 [2024-06-07 14:22:38.176330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.803 [2024-06-07 14:22:38.188999] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.189014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.201916] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.201931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.215177] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.215192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.228216] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.228231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.241709] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.241724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.255080] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.255095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.267579] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.267595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.280975] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.280989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.293828] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.293843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.307308] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.307324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.319825] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.319840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.332945] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.332960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.346400] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.346416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.358639] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.358654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.371816] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.371831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.385130] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.385145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.398487] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.398502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.411880] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.411895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.425331] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.425346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:14.804 [2024-06-07 14:22:38.438593] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:14.804 [2024-06-07 14:22:38.438609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.066 [2024-06-07 14:22:38.451300] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.066 [2024-06-07 14:22:38.451316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.066 00:20:15.066 Latency(us) 00:20:15.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.066 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:20:15.066 Nvme1n1 : 5.01 19534.55 152.61 0.00 0.00 6544.87 2867.20 15947.09 00:20:15.066 =================================================================================================================== 00:20:15.066 Total : 19534.55 152.61 0.00 0.00 6544.87 2867.20 15947.09 00:20:15.066 [2024-06-07 14:22:38.460598] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.066 [2024-06-07 14:22:38.460612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.066 [2024-06-07 14:22:38.472631] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.066 [2024-06-07 14:22:38.472644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.066 [2024-06-07 14:22:38.484658] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.066 [2024-06-07 14:22:38.484671] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.066 [2024-06-07 14:22:38.496691] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.066 [2024-06-07 14:22:38.496702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.066 [2024-06-07 14:22:38.508721] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.066 [2024-06-07 14:22:38.508731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.066 [2024-06-07 14:22:38.520746] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.066 [2024-06-07 14:22:38.520755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.066 [2024-06-07 14:22:38.532777] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.066 [2024-06-07 14:22:38.532786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.066 [2024-06-07 14:22:38.544810] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.066 [2024-06-07 14:22:38.544821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.066 [2024-06-07 14:22:38.556840] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.066 [2024-06-07 14:22:38.556852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.066 [2024-06-07 14:22:38.568870] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:15.066 [2024-06-07 14:22:38.568879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:15.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (527337) - No such process 00:20:15.066 14:22:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 527337 00:20:15.066 14:22:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:15.066 14:22:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.066 14:22:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:15.066 14:22:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.066 14:22:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:15.066 14:22:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.066 14:22:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:15.066 delay0 00:20:15.066 14:22:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.066 14:22:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:20:15.066 14:22:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.066 14:22:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:15.066 14:22:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.066 14:22:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:20:15.066 EAL: No free 2048 kB hugepages reported 
on node 1 00:20:15.066 [2024-06-07 14:22:38.708724] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:20:23.201 Initializing NVMe Controllers 00:20:23.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:23.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:23.202 Initialization complete. Launching workers. 00:20:23.202 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 239, failed: 30420 00:20:23.202 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 30521, failed to submit 138 00:20:23.202 success 30436, unsuccess 85, failed 0 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:23.202 rmmod nvme_tcp 00:20:23.202 rmmod nvme_fabrics 00:20:23.202 rmmod nvme_keyring 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 525281 ']' 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 525281 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 525281 ']' 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 525281 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 525281 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 525281' 00:20:23.202 killing process with pid 525281 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 525281 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 525281 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.202 14:22:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.584 14:22:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:24.584 00:20:24.584 real 0m34.609s 00:20:24.584 user 0m45.353s 00:20:24.584 sys 0m11.579s 00:20:24.584 14:22:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:24.584 14:22:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:20:24.584 ************************************ 00:20:24.584 END TEST nvmf_zcopy 00:20:24.584 ************************************ 00:20:24.584 14:22:48 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:24.584 14:22:48 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:24.584 14:22:48 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:24.584 14:22:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:24.584 ************************************ 00:20:24.584 START TEST nvmf_nmic 00:20:24.584 ************************************ 00:20:24.584 14:22:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:24.845 * Looking for test storage... 00:20:24.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
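(Editor's note, not part of the captured log: the nvmf_zcopy run that just finished above drives a short RPC sequence — remove NSID 1, wrap malloc0 in a delay bdev, re-add it as NSID 1, then hammer it with the bundled abort example. A rough reproduction sketch of that sequence, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 with a malloc0 bdev, and using scripts/rpc.py by hand in place of the suite's rpc_cmd wrapper:

  # Drop the existing namespace so the delay bdev can take NSID 1.
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # Wrap malloc0 in a delay bdev, with the same latency arguments used in the log above.
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # Expose the delay bdev as NSID 1 again.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive random I/O and abort requests against the TCP listener.
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

End of editor's note; the nvmf_nmic test log continues below.)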
00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.845 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:20:24.846 14:22:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:20:32.985 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:32.986 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:32.986 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:32.986 Found net devices under 0000:31:00.0: cvl_0_0 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:32.986 Found net devices under 0000:31:00.1: cvl_0_1 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:32.986 14:22:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:32.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:32.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.798 ms 00:20:32.986 00:20:32.986 --- 10.0.0.2 ping statistics --- 00:20:32.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.986 rtt min/avg/max/mdev = 0.798/0.798/0.798/0.000 ms 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:32.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:32.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:20:32.986 00:20:32.986 --- 10.0.0.1 ping statistics --- 00:20:32.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.986 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=534522 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 534522 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 534522 ']' 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.986 14:22:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:32.987 14:22:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:32.987 [2024-06-07 14:22:56.392412] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:20:32.987 [2024-06-07 14:22:56.392476] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.987 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.987 [2024-06-07 14:22:56.473858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:32.987 [2024-06-07 14:22:56.515201] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.987 [2024-06-07 14:22:56.515247] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.987 [2024-06-07 14:22:56.515255] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.987 [2024-06-07 14:22:56.515262] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.987 [2024-06-07 14:22:56.515268] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.987 [2024-06-07 14:22:56.515341] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.987 [2024-06-07 14:22:56.515457] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.987 [2024-06-07 14:22:56.515612] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.987 [2024-06-07 14:22:56.515613] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:20:33.557 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:33.557 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:20:33.557 14:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:33.557 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:33.557 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:33.818 [2024-06-07 14:22:57.220816] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:33.818 Malloc0 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:33.818 [2024-06-07 14:22:57.280154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:33.818 test case1: single bdev can't be used in multiple subsystems 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:33.818 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.819 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:20:33.819 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:33.819 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.819 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:33.819 [2024-06-07 14:22:57.316120] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:33.819 [2024-06-07 14:22:57.316139] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:33.819 [2024-06-07 14:22:57.316146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:33.819 request: 00:20:33.819 { 00:20:33.819 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:33.819 "namespace": { 00:20:33.819 "bdev_name": "Malloc0", 00:20:33.819 "no_auto_visible": false 00:20:33.819 }, 00:20:33.819 "method": "nvmf_subsystem_add_ns", 00:20:33.819 "req_id": 1 00:20:33.819 } 00:20:33.819 Got JSON-RPC error response 00:20:33.819 response: 00:20:33.819 { 00:20:33.819 "code": -32602, 00:20:33.819 "message": "Invalid parameters" 00:20:33.819 } 00:20:33.819 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:20:33.819 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:20:33.819 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:33.819 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:20:33.819 Adding namespace failed - expected result. 00:20:33.819 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:33.819 test case2: host connect to nvmf target in multiple paths 00:20:33.819 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:33.819 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.819 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:33.819 [2024-06-07 14:22:57.328246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:33.819 14:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.819 14:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:35.732 14:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:20:36.675 14:23:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:36.675 14:23:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:20:36.675 14:23:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:20:36.675 14:23:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:20:36.675 14:23:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:20:39.262 14:23:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:20:39.262 14:23:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:39.262 14:23:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:20:39.262 14:23:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:20:39.262 14:23:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:20:39.262 14:23:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:20:39.262 14:23:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:39.262 [global] 00:20:39.262 thread=1 00:20:39.262 invalidate=1 00:20:39.262 rw=write 00:20:39.262 time_based=1 00:20:39.262 runtime=1 00:20:39.262 ioengine=libaio 00:20:39.262 direct=1 00:20:39.262 bs=4096 00:20:39.262 iodepth=1 00:20:39.262 norandommap=0 00:20:39.262 numjobs=1 00:20:39.262 00:20:39.262 verify_dump=1 00:20:39.262 verify_backlog=512 00:20:39.262 verify_state_save=0 00:20:39.262 do_verify=1 00:20:39.262 verify=crc32c-intel 00:20:39.262 [job0] 00:20:39.262 filename=/dev/nvme0n1 00:20:39.262 Could not set queue depth (nvme0n1) 00:20:39.262 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:39.262 fio-3.35 00:20:39.262 Starting 1 thread 00:20:40.202 00:20:40.202 job0: (groupid=0, jobs=1): err= 0: pid=535880: Fri Jun 7 14:23:03 2024 00:20:40.202 read: IOPS=511, BW=2046KiB/s 
(2095kB/s)(2048KiB/1001msec) 00:20:40.202 slat (nsec): min=6754, max=61232, avg=26112.66, stdev=2570.80 00:20:40.202 clat (usec): min=566, max=1237, avg=991.93, stdev=72.09 00:20:40.202 lat (usec): min=592, max=1262, avg=1018.04, stdev=72.09 00:20:40.202 clat percentiles (usec): 00:20:40.202 | 1.00th=[ 799], 5.00th=[ 865], 10.00th=[ 898], 20.00th=[ 947], 00:20:40.202 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 996], 60.00th=[ 1012], 00:20:40.202 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1090], 00:20:40.202 | 99.00th=[ 1156], 99.50th=[ 1188], 99.90th=[ 1237], 99.95th=[ 1237], 00:20:40.202 | 99.99th=[ 1237] 00:20:40.202 write: IOPS=725, BW=2901KiB/s (2971kB/s)(2904KiB/1001msec); 0 zone resets 00:20:40.202 slat (usec): min=9, max=28173, avg=69.06, stdev=1044.55 00:20:40.202 clat (usec): min=273, max=821, avg=577.36, stdev=91.23 00:20:40.202 lat (usec): min=282, max=28871, avg=646.42, stdev=1053.30 00:20:40.202 clat percentiles (usec): 00:20:40.202 | 1.00th=[ 351], 5.00th=[ 404], 10.00th=[ 461], 20.00th=[ 498], 00:20:40.202 | 30.00th=[ 545], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 603], 00:20:40.202 | 70.00th=[ 635], 80.00th=[ 660], 90.00th=[ 685], 95.00th=[ 709], 00:20:40.202 | 99.00th=[ 742], 99.50th=[ 775], 99.90th=[ 824], 99.95th=[ 824], 00:20:40.202 | 99.99th=[ 824] 00:20:40.202 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:20:40.202 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:40.202 lat (usec) : 500=11.95%, 750=46.37%, 1000=21.97% 00:20:40.202 lat (msec) : 2=19.71% 00:20:40.202 cpu : usr=3.50%, sys=3.80%, ctx=1241, majf=0, minf=1 00:20:40.202 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:40.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.202 issued rwts: total=512,726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:40.202 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:40.202 00:20:40.202 Run status group 0 (all jobs): 00:20:40.202 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:20:40.202 WRITE: bw=2901KiB/s (2971kB/s), 2901KiB/s-2901KiB/s (2971kB/s-2971kB/s), io=2904KiB (2974kB), run=1001-1001msec 00:20:40.202 00:20:40.202 Disk stats (read/write): 00:20:40.202 nvme0n1: ios=537/561, merge=0/0, ticks=1473/253, in_queue=1726, util=98.90% 00:20:40.462 14:23:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:40.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:40.462 14:23:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:40.462 14:23:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:20:40.462 14:23:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:20:40.462 14:23:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:40.462 14:23:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:20:40.462 14:23:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:40.462 14:23:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:20:40.462 14:23:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:40.462 14:23:03 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@53 -- # nvmftestfini 00:20:40.462 14:23:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:40.462 14:23:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:20:40.462 14:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:40.462 14:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:20:40.462 14:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:40.462 14:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:40.462 rmmod nvme_tcp 00:20:40.462 rmmod nvme_fabrics 00:20:40.462 rmmod nvme_keyring 00:20:40.462 14:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.462 14:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:20:40.462 14:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:20:40.462 14:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 534522 ']' 00:20:40.462 14:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 534522 00:20:40.462 14:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 534522 ']' 00:20:40.462 14:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 534522 00:20:40.462 14:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:20:40.462 14:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:40.462 14:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 534522 00:20:40.723 14:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:40.723 14:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:40.723 14:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 534522' 00:20:40.723 killing process with pid 534522 00:20:40.723 14:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 534522 00:20:40.723 14:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 534522 00:20:40.723 14:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:40.723 14:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:40.723 14:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:40.723 14:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.723 14:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:40.723 14:23:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.723 14:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.723 14:23:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.265 14:23:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:43.265 00:20:43.265 real 0m18.176s 00:20:43.265 user 0m48.212s 00:20:43.265 sys 0m6.697s 00:20:43.265 14:23:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:20:43.265 14:23:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:20:43.265 ************************************ 00:20:43.265 END TEST nvmf_nmic 00:20:43.265 ************************************ 00:20:43.265 14:23:06 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:43.265 14:23:06 nvmf_tcp -- 
common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:20:43.265 14:23:06 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:20:43.265 14:23:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:43.265 ************************************ 00:20:43.265 START TEST nvmf_fio_target 00:20:43.265 ************************************ 00:20:43.265 14:23:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:43.265 * Looking for test storage... 00:20:43.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:43.265 14:23:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:43.265 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:20:43.265 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.265 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:43.266 14:23:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.401 14:23:14 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:51.401 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:51.401 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.401 14:23:14 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:51.401 Found net devices under 0000:31:00.0: cvl_0_0 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:51.401 Found net devices under 0000:31:00.1: cvl_0_1 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:51.401 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:51.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:20:51.402 00:20:51.402 --- 10.0.0.2 ping statistics --- 00:20:51.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.402 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:51.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:51.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:20:51.402 00:20:51.402 --- 10.0.0.1 ping statistics --- 00:20:51.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.402 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=540891 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 540891 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 540891 ']' 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:51.402 14:23:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.402 [2024-06-07 14:23:14.756266] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:20:51.402 [2024-06-07 14:23:14.756333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.402 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.402 [2024-06-07 14:23:14.834186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:51.402 [2024-06-07 14:23:14.874956] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.402 [2024-06-07 14:23:14.874996] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.402 [2024-06-07 14:23:14.875004] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.402 [2024-06-07 14:23:14.875010] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.402 [2024-06-07 14:23:14.875016] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.402 [2024-06-07 14:23:14.875158] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.402 [2024-06-07 14:23:14.875303] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.402 [2024-06-07 14:23:14.875576] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:20:51.402 [2024-06-07 14:23:14.875578] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.972 14:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:51.972 14:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:20:51.972 14:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:51.972 14:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:51.972 14:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.972 14:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.972 14:23:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:52.232 [2024-06-07 14:23:15.722369] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.232 14:23:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:52.492 14:23:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:52.492 14:23:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:52.492 14:23:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:52.492 14:23:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:52.752 14:23:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:52.752 14:23:16 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:53.014 14:23:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:53.014 14:23:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:53.014 14:23:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:53.275 14:23:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:53.275 14:23:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:53.536 14:23:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:53.536 14:23:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:53.536 14:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:53.536 14:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:53.797 14:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:54.058 14:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:54.058 14:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:54.058 14:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:54.058 14:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:54.319 14:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:54.319 [2024-06-07 14:23:17.916181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.319 14:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:54.580 14:23:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:54.841 14:23:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:56.227 14:23:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:56.227 14:23:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:20:56.227 14:23:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # 
local nvme_device_counter=1 nvme_devices=0 00:20:56.227 14:23:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:20:56.227 14:23:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:20:56.227 14:23:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:20:58.774 14:23:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:20:58.774 14:23:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:58.774 14:23:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:20:58.774 14:23:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:20:58.774 14:23:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:20:58.774 14:23:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:20:58.774 14:23:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:58.774 [global] 00:20:58.774 thread=1 00:20:58.774 invalidate=1 00:20:58.774 rw=write 00:20:58.774 time_based=1 00:20:58.774 runtime=1 00:20:58.774 ioengine=libaio 00:20:58.774 direct=1 00:20:58.774 bs=4096 00:20:58.774 iodepth=1 00:20:58.774 norandommap=0 00:20:58.774 numjobs=1 00:20:58.774 00:20:58.774 verify_dump=1 00:20:58.774 verify_backlog=512 00:20:58.774 verify_state_save=0 00:20:58.774 do_verify=1 00:20:58.774 verify=crc32c-intel 00:20:58.774 [job0] 00:20:58.774 filename=/dev/nvme0n1 00:20:58.774 [job1] 00:20:58.774 filename=/dev/nvme0n2 00:20:58.774 [job2] 00:20:58.774 filename=/dev/nvme0n3 00:20:58.774 [job3] 00:20:58.774 filename=/dev/nvme0n4 00:20:58.775 Could not set queue depth (nvme0n1) 00:20:58.775 Could not set queue depth (nvme0n2) 00:20:58.775 Could not set queue depth (nvme0n3) 00:20:58.775 Could not set queue depth (nvme0n4) 00:20:58.775 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:58.775 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:58.775 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:58.775 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:58.775 fio-3.35 00:20:58.775 Starting 4 threads 00:21:00.270 00:21:00.270 job0: (groupid=0, jobs=1): err= 0: pid=542504: Fri Jun 7 14:23:23 2024 00:21:00.270 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:21:00.270 slat (nsec): min=7781, max=58376, avg=24194.99, stdev=3097.18 00:21:00.270 clat (usec): min=559, max=1188, avg=991.80, stdev=96.71 00:21:00.270 lat (usec): min=582, max=1212, avg=1016.00, stdev=96.78 00:21:00.270 clat percentiles (usec): 00:21:00.270 | 1.00th=[ 668], 5.00th=[ 799], 10.00th=[ 865], 20.00th=[ 930], 00:21:00.270 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:21:00.270 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:21:00.270 | 99.00th=[ 1156], 99.50th=[ 1156], 99.90th=[ 1188], 99.95th=[ 1188], 00:21:00.270 | 99.99th=[ 1188] 00:21:00.270 write: IOPS=745, BW=2981KiB/s (3053kB/s)(2984KiB/1001msec); 0 zone resets 00:21:00.270 slat (nsec): min=9006, max=51351, avg=26545.39, stdev=9393.12 00:21:00.270 clat (usec): min=136, max=902, avg=604.59, 
stdev=128.14 00:21:00.270 lat (usec): min=146, max=933, avg=631.14, stdev=133.33 00:21:00.270 clat percentiles (usec): 00:21:00.270 | 1.00th=[ 251], 5.00th=[ 338], 10.00th=[ 420], 20.00th=[ 502], 00:21:00.270 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 652], 00:21:00.270 | 70.00th=[ 693], 80.00th=[ 717], 90.00th=[ 742], 95.00th=[ 766], 00:21:00.270 | 99.00th=[ 816], 99.50th=[ 848], 99.90th=[ 906], 99.95th=[ 906], 00:21:00.270 | 99.99th=[ 906] 00:21:00.270 bw ( KiB/s): min= 4096, max= 4096, per=35.29%, avg=4096.00, stdev= 0.00, samples=1 00:21:00.270 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:00.270 lat (usec) : 250=0.56%, 500=10.73%, 750=44.04%, 1000=21.78% 00:21:00.270 lat (msec) : 2=22.89% 00:21:00.270 cpu : usr=1.50%, sys=3.70%, ctx=1258, majf=0, minf=1 00:21:00.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:00.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.270 issued rwts: total=512,746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:00.270 job1: (groupid=0, jobs=1): err= 0: pid=542505: Fri Jun 7 14:23:23 2024 00:21:00.270 read: IOPS=712, BW=2851KiB/s (2919kB/s)(2956KiB/1037msec) 00:21:00.270 slat (nsec): min=4163, max=42859, avg=16649.19, stdev=5403.59 00:21:00.270 clat (usec): min=280, max=43071, avg=740.33, stdev=1568.60 00:21:00.270 lat (usec): min=290, max=43096, avg=756.98, stdev=1569.06 00:21:00.270 clat percentiles (usec): 00:21:00.270 | 1.00th=[ 338], 5.00th=[ 424], 10.00th=[ 478], 20.00th=[ 545], 00:21:00.270 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 652], 60.00th=[ 717], 00:21:00.270 | 70.00th=[ 783], 80.00th=[ 848], 90.00th=[ 922], 95.00th=[ 963], 00:21:00.270 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[43254], 99.95th=[43254], 00:21:00.270 | 99.99th=[43254] 00:21:00.270 write: IOPS=987, BW=3950KiB/s (4045kB/s)(4096KiB/1037msec); 0 zone resets 00:21:00.270 slat (usec): min=5, max=1208, avg=19.47, stdev=39.10 00:21:00.270 clat (usec): min=104, max=874, avg=436.30, stdev=135.23 00:21:00.270 lat (usec): min=110, max=2082, avg=455.77, stdev=145.99 00:21:00.270 clat percentiles (usec): 00:21:00.270 | 1.00th=[ 128], 5.00th=[ 231], 10.00th=[ 265], 20.00th=[ 318], 00:21:00.270 | 30.00th=[ 359], 40.00th=[ 400], 50.00th=[ 441], 60.00th=[ 469], 00:21:00.270 | 70.00th=[ 502], 80.00th=[ 545], 90.00th=[ 611], 95.00th=[ 668], 00:21:00.270 | 99.00th=[ 791], 99.50th=[ 824], 99.90th=[ 857], 99.95th=[ 873], 00:21:00.270 | 99.99th=[ 873] 00:21:00.270 bw ( KiB/s): min= 4096, max= 4096, per=35.29%, avg=4096.00, stdev= 0.00, samples=2 00:21:00.270 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:21:00.270 lat (usec) : 250=4.59%, 500=40.78%, 750=38.74%, 1000=14.52% 00:21:00.270 lat (msec) : 2=1.30%, 50=0.06% 00:21:00.270 cpu : usr=1.74%, sys=2.70%, ctx=1767, majf=0, minf=1 00:21:00.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:00.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.270 issued rwts: total=739,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:00.270 job2: (groupid=0, jobs=1): err= 0: pid=542512: Fri Jun 7 14:23:23 2024 00:21:00.270 read: IOPS=20, BW=83.3KiB/s 
(85.3kB/s)(84.0KiB/1008msec) 00:21:00.270 slat (nsec): min=26274, max=28186, avg=26824.29, stdev=375.17 00:21:00.270 clat (usec): min=855, max=43125, avg=39926.10, stdev=8987.47 00:21:00.270 lat (usec): min=884, max=43152, avg=39952.93, stdev=8987.15 00:21:00.270 clat percentiles (usec): 00:21:00.270 | 1.00th=[ 857], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:21:00.270 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:21:00.270 | 70.00th=[42206], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:21:00.270 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:21:00.270 | 99.99th=[43254] 00:21:00.270 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:21:00.270 slat (usec): min=9, max=2013, avg=28.37, stdev=88.98 00:21:00.270 clat (usec): min=116, max=1182, avg=292.88, stdev=83.31 00:21:00.270 lat (usec): min=128, max=3195, avg=321.25, stdev=149.10 00:21:00.270 clat percentiles (usec): 00:21:00.270 | 1.00th=[ 125], 5.00th=[ 139], 10.00th=[ 192], 20.00th=[ 249], 00:21:00.270 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 310], 00:21:00.270 | 70.00th=[ 322], 80.00th=[ 343], 90.00th=[ 379], 95.00th=[ 408], 00:21:00.270 | 99.00th=[ 498], 99.50th=[ 529], 99.90th=[ 1188], 99.95th=[ 1188], 00:21:00.270 | 99.99th=[ 1188] 00:21:00.270 bw ( KiB/s): min= 4096, max= 4096, per=35.29%, avg=4096.00, stdev= 0.00, samples=1 00:21:00.270 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:00.270 lat (usec) : 250=19.51%, 500=75.61%, 750=0.75%, 1000=0.19% 00:21:00.270 lat (msec) : 2=0.19%, 50=3.75% 00:21:00.270 cpu : usr=0.79%, sys=1.59%, ctx=536, majf=0, minf=1 00:21:00.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:00.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.270 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:00.270 job3: (groupid=0, jobs=1): err= 0: pid=542513: Fri Jun 7 14:23:23 2024 00:21:00.270 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:21:00.270 slat (nsec): min=7021, max=44459, avg=26017.76, stdev=2634.25 00:21:00.270 clat (usec): min=522, max=1285, avg=984.25, stdev=118.18 00:21:00.271 lat (usec): min=549, max=1311, avg=1010.27, stdev=118.23 00:21:00.271 clat percentiles (usec): 00:21:00.271 | 1.00th=[ 652], 5.00th=[ 766], 10.00th=[ 832], 20.00th=[ 898], 00:21:00.271 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[ 996], 60.00th=[ 1020], 00:21:00.271 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:21:00.271 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1287], 99.95th=[ 1287], 00:21:00.271 | 99.99th=[ 1287] 00:21:00.271 write: IOPS=726, BW=2905KiB/s (2975kB/s)(2908KiB/1001msec); 0 zone resets 00:21:00.271 slat (nsec): min=9863, max=64414, avg=31197.32, stdev=9437.84 00:21:00.271 clat (usec): min=276, max=1098, avg=618.27, stdev=128.36 00:21:00.271 lat (usec): min=287, max=1138, avg=649.47, stdev=130.41 00:21:00.271 clat percentiles (usec): 00:21:00.271 | 1.00th=[ 306], 5.00th=[ 404], 10.00th=[ 441], 20.00th=[ 506], 00:21:00.271 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 652], 00:21:00.271 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 816], 00:21:00.271 | 99.00th=[ 922], 99.50th=[ 963], 99.90th=[ 1106], 99.95th=[ 1106], 00:21:00.271 | 99.99th=[ 1106] 00:21:00.271 bw ( 
KiB/s): min= 4096, max= 4096, per=35.29%, avg=4096.00, stdev= 0.00, samples=1 00:21:00.271 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:00.271 lat (usec) : 500=11.30%, 750=40.76%, 1000=27.52% 00:21:00.271 lat (msec) : 2=20.42% 00:21:00.271 cpu : usr=1.30%, sys=4.30%, ctx=1242, majf=0, minf=1 00:21:00.271 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:00.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.271 issued rwts: total=512,727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.271 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:00.271 00:21:00.271 Run status group 0 (all jobs): 00:21:00.271 READ: bw=6881KiB/s (7047kB/s), 83.3KiB/s-2851KiB/s (85.3kB/s-2919kB/s), io=7136KiB (7307kB), run=1001-1037msec 00:21:00.271 WRITE: bw=11.3MiB/s (11.9MB/s), 2032KiB/s-3950KiB/s (2081kB/s-4045kB/s), io=11.8MiB (12.3MB), run=1001-1037msec 00:21:00.271 00:21:00.271 Disk stats (read/write): 00:21:00.271 nvme0n1: ios=553/512, merge=0/0, ticks=544/284, in_queue=828, util=86.67% 00:21:00.271 nvme0n2: ios=581/1024, merge=0/0, ticks=495/433, in_queue=928, util=90.71% 00:21:00.271 nvme0n3: ios=63/512, merge=0/0, ticks=734/120, in_queue=854, util=94.83% 00:21:00.271 nvme0n4: ios=541/512, merge=0/0, ticks=619/302, in_queue=921, util=96.90% 00:21:00.271 14:23:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:21:00.271 [global] 00:21:00.271 thread=1 00:21:00.271 invalidate=1 00:21:00.271 rw=randwrite 00:21:00.271 time_based=1 00:21:00.271 runtime=1 00:21:00.271 ioengine=libaio 00:21:00.271 direct=1 00:21:00.271 bs=4096 00:21:00.271 iodepth=1 00:21:00.271 norandommap=0 00:21:00.271 numjobs=1 00:21:00.271 00:21:00.271 verify_dump=1 00:21:00.271 verify_backlog=512 00:21:00.271 verify_state_save=0 00:21:00.271 do_verify=1 00:21:00.271 verify=crc32c-intel 00:21:00.271 [job0] 00:21:00.271 filename=/dev/nvme0n1 00:21:00.271 [job1] 00:21:00.271 filename=/dev/nvme0n2 00:21:00.271 [job2] 00:21:00.271 filename=/dev/nvme0n3 00:21:00.271 [job3] 00:21:00.271 filename=/dev/nvme0n4 00:21:00.271 Could not set queue depth (nvme0n1) 00:21:00.271 Could not set queue depth (nvme0n2) 00:21:00.271 Could not set queue depth (nvme0n3) 00:21:00.271 Could not set queue depth (nvme0n4) 00:21:00.532 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:00.532 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:00.532 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:00.532 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:00.532 fio-3.35 00:21:00.532 Starting 4 threads 00:21:01.916 00:21:01.916 job0: (groupid=0, jobs=1): err= 0: pid=543008: Fri Jun 7 14:23:25 2024 00:21:01.916 read: IOPS=18, BW=73.6KiB/s (75.3kB/s)(76.0KiB/1033msec) 00:21:01.916 slat (nsec): min=25884, max=27177, avg=26226.11, stdev=325.04 00:21:01.916 clat (usec): min=40885, max=42083, avg=41654.55, stdev=450.55 00:21:01.916 lat (usec): min=40911, max=42110, avg=41680.78, stdev=450.61 00:21:01.916 clat percentiles (usec): 00:21:01.916 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:21:01.916 | 
30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:21:01.916 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:21:01.916 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:21:01.916 | 99.99th=[42206] 00:21:01.916 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:21:01.916 slat (nsec): min=8314, max=52102, avg=29545.26, stdev=8234.52 00:21:01.916 clat (usec): min=135, max=888, avg=432.65, stdev=117.11 00:21:01.916 lat (usec): min=146, max=920, avg=462.20, stdev=120.48 00:21:01.916 clat percentiles (usec): 00:21:01.916 | 1.00th=[ 215], 5.00th=[ 255], 10.00th=[ 285], 20.00th=[ 330], 00:21:01.917 | 30.00th=[ 351], 40.00th=[ 375], 50.00th=[ 437], 60.00th=[ 469], 00:21:01.917 | 70.00th=[ 502], 80.00th=[ 537], 90.00th=[ 586], 95.00th=[ 619], 00:21:01.917 | 99.00th=[ 693], 99.50th=[ 734], 99.90th=[ 889], 99.95th=[ 889], 00:21:01.917 | 99.99th=[ 889] 00:21:01.917 bw ( KiB/s): min= 4096, max= 4096, per=33.96%, avg=4096.00, stdev= 0.00, samples=1 00:21:01.917 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:01.917 lat (usec) : 250=4.14%, 500=62.71%, 750=29.19%, 1000=0.38% 00:21:01.917 lat (msec) : 50=3.58% 00:21:01.917 cpu : usr=1.16%, sys=1.84%, ctx=531, majf=0, minf=1 00:21:01.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.917 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:01.917 job1: (groupid=0, jobs=1): err= 0: pid=543010: Fri Jun 7 14:23:25 2024 00:21:01.917 read: IOPS=660, BW=2641KiB/s (2705kB/s)(2644KiB/1001msec) 00:21:01.917 slat (nsec): min=6302, max=54518, avg=22768.52, stdev=7529.43 00:21:01.917 clat (usec): min=528, max=959, avg=760.62, stdev=67.99 00:21:01.917 lat (usec): min=553, max=984, avg=783.39, stdev=69.99 00:21:01.917 clat percentiles (usec): 00:21:01.917 | 1.00th=[ 586], 5.00th=[ 635], 10.00th=[ 660], 20.00th=[ 701], 00:21:01.917 | 30.00th=[ 742], 40.00th=[ 758], 50.00th=[ 775], 60.00th=[ 791], 00:21:01.917 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 840], 95.00th=[ 857], 00:21:01.917 | 99.00th=[ 889], 99.50th=[ 898], 99.90th=[ 963], 99.95th=[ 963], 00:21:01.917 | 99.99th=[ 963] 00:21:01.917 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:21:01.917 slat (nsec): min=8963, max=72259, avg=26260.89, stdev=9474.25 00:21:01.917 clat (usec): min=193, max=966, avg=433.33, stdev=102.39 00:21:01.917 lat (usec): min=224, max=998, avg=459.60, stdev=105.03 00:21:01.917 clat percentiles (usec): 00:21:01.917 | 1.00th=[ 215], 5.00th=[ 269], 10.00th=[ 306], 20.00th=[ 351], 00:21:01.917 | 30.00th=[ 375], 40.00th=[ 420], 50.00th=[ 441], 60.00th=[ 461], 00:21:01.917 | 70.00th=[ 478], 80.00th=[ 502], 90.00th=[ 545], 95.00th=[ 603], 00:21:01.917 | 99.00th=[ 734], 99.50th=[ 750], 99.90th=[ 799], 99.95th=[ 963], 00:21:01.917 | 99.99th=[ 963] 00:21:01.917 bw ( KiB/s): min= 4096, max= 4096, per=33.96%, avg=4096.00, stdev= 0.00, samples=1 00:21:01.917 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:01.917 lat (usec) : 250=2.43%, 500=45.40%, 750=26.71%, 1000=25.46% 00:21:01.917 cpu : usr=2.30%, sys=4.40%, ctx=1687, majf=0, minf=1 00:21:01.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.917 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.917 issued rwts: total=661,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:01.917 job2: (groupid=0, jobs=1): err= 0: pid=543017: Fri Jun 7 14:23:25 2024 00:21:01.917 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:21:01.917 slat (nsec): min=6896, max=61079, avg=24807.48, stdev=6782.00 00:21:01.917 clat (usec): min=267, max=1796, avg=570.50, stdev=98.40 00:21:01.917 lat (usec): min=274, max=1823, avg=595.31, stdev=99.18 00:21:01.917 clat percentiles (usec): 00:21:01.917 | 1.00th=[ 351], 5.00th=[ 408], 10.00th=[ 420], 20.00th=[ 457], 00:21:01.917 | 30.00th=[ 537], 40.00th=[ 594], 50.00th=[ 611], 60.00th=[ 619], 00:21:01.917 | 70.00th=[ 627], 80.00th=[ 644], 90.00th=[ 660], 95.00th=[ 676], 00:21:01.917 | 99.00th=[ 701], 99.50th=[ 725], 99.90th=[ 750], 99.95th=[ 1795], 00:21:01.917 | 99.99th=[ 1795] 00:21:01.917 write: IOPS=1065, BW=4264KiB/s (4366kB/s)(4268KiB/1001msec); 0 zone resets 00:21:01.917 slat (nsec): min=9347, max=65521, avg=25337.29, stdev=11368.75 00:21:01.917 clat (usec): min=112, max=673, avg=325.21, stdev=81.03 00:21:01.917 lat (usec): min=122, max=705, avg=350.55, stdev=82.98 00:21:01.917 clat percentiles (usec): 00:21:01.917 | 1.00th=[ 190], 5.00th=[ 210], 10.00th=[ 229], 20.00th=[ 265], 00:21:01.917 | 30.00th=[ 281], 40.00th=[ 306], 50.00th=[ 322], 60.00th=[ 334], 00:21:01.917 | 70.00th=[ 347], 80.00th=[ 375], 90.00th=[ 420], 95.00th=[ 498], 00:21:01.917 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[ 668], 99.95th=[ 676], 00:21:01.917 | 99.99th=[ 676] 00:21:01.917 bw ( KiB/s): min= 4096, max= 4096, per=33.96%, avg=4096.00, stdev= 0.00, samples=1 00:21:01.917 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:01.917 lat (usec) : 250=7.36%, 500=53.32%, 750=39.22%, 1000=0.05% 00:21:01.917 lat (msec) : 2=0.05% 00:21:01.917 cpu : usr=2.50%, sys=5.80%, ctx=2095, majf=0, minf=1 00:21:01.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.917 issued rwts: total=1024,1067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:01.917 job3: (groupid=0, jobs=1): err= 0: pid=543021: Fri Jun 7 14:23:25 2024 00:21:01.917 read: IOPS=401, BW=1606KiB/s (1645kB/s)(1608KiB/1001msec) 00:21:01.917 slat (nsec): min=24185, max=41359, avg=24983.76, stdev=1905.92 00:21:01.917 clat (usec): min=771, max=43111, avg=1572.55, stdev=4606.44 00:21:01.917 lat (usec): min=795, max=43136, avg=1597.54, stdev=4606.41 00:21:01.917 clat percentiles (usec): 00:21:01.917 | 1.00th=[ 816], 5.00th=[ 914], 10.00th=[ 947], 20.00th=[ 1020], 00:21:01.917 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1074], 00:21:01.917 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:21:01.917 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:21:01.917 | 99.99th=[43254] 00:21:01.917 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:21:01.917 slat (nsec): min=9679, max=73243, avg=29315.13, stdev=6605.37 00:21:01.917 clat (usec): min=305, max=995, avg=655.59, stdev=136.79 00:21:01.917 lat (usec): min=320, max=1026, avg=684.91, stdev=137.92 
00:21:01.917 clat percentiles (usec): 00:21:01.917 | 1.00th=[ 363], 5.00th=[ 420], 10.00th=[ 461], 20.00th=[ 537], 00:21:01.917 | 30.00th=[ 578], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 701], 00:21:01.917 | 70.00th=[ 734], 80.00th=[ 775], 90.00th=[ 840], 95.00th=[ 873], 00:21:01.917 | 99.00th=[ 938], 99.50th=[ 947], 99.90th=[ 996], 99.95th=[ 996], 00:21:01.917 | 99.99th=[ 996] 00:21:01.917 bw ( KiB/s): min= 4096, max= 4096, per=33.96%, avg=4096.00, stdev= 0.00, samples=1 00:21:01.917 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:21:01.917 lat (usec) : 500=8.10%, 750=33.92%, 1000=21.44% 00:21:01.917 lat (msec) : 2=36.00%, 50=0.55% 00:21:01.917 cpu : usr=1.70%, sys=2.30%, ctx=915, majf=0, minf=1 00:21:01.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.917 issued rwts: total=402,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:01.917 00:21:01.917 Run status group 0 (all jobs): 00:21:01.917 READ: bw=8155KiB/s (8351kB/s), 73.6KiB/s-4092KiB/s (75.3kB/s-4190kB/s), io=8424KiB (8626kB), run=1001-1033msec 00:21:01.917 WRITE: bw=11.8MiB/s (12.4MB/s), 1983KiB/s-4264KiB/s (2030kB/s-4366kB/s), io=12.2MiB (12.8MB), run=1001-1033msec 00:21:01.917 00:21:01.917 Disk stats (read/write): 00:21:01.917 nvme0n1: ios=64/512, merge=0/0, ticks=643/183, in_queue=826, util=87.37% 00:21:01.917 nvme0n2: ios=562/930, merge=0/0, ticks=484/395, in_queue=879, util=91.85% 00:21:01.917 nvme0n3: ios=795/1024, merge=0/0, ticks=672/311, in_queue=983, util=96.52% 00:21:01.917 nvme0n4: ios=308/512, merge=0/0, ticks=584/327, in_queue=911, util=95.83% 00:21:01.917 14:23:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:21:01.917 [global] 00:21:01.917 thread=1 00:21:01.917 invalidate=1 00:21:01.917 rw=write 00:21:01.917 time_based=1 00:21:01.917 runtime=1 00:21:01.917 ioengine=libaio 00:21:01.917 direct=1 00:21:01.917 bs=4096 00:21:01.917 iodepth=128 00:21:01.917 norandommap=0 00:21:01.917 numjobs=1 00:21:01.917 00:21:01.917 verify_dump=1 00:21:01.917 verify_backlog=512 00:21:01.917 verify_state_save=0 00:21:01.917 do_verify=1 00:21:01.917 verify=crc32c-intel 00:21:01.917 [job0] 00:21:01.917 filename=/dev/nvme0n1 00:21:01.917 [job1] 00:21:01.917 filename=/dev/nvme0n2 00:21:01.917 [job2] 00:21:01.917 filename=/dev/nvme0n3 00:21:01.917 [job3] 00:21:01.917 filename=/dev/nvme0n4 00:21:01.917 Could not set queue depth (nvme0n1) 00:21:01.917 Could not set queue depth (nvme0n2) 00:21:01.917 Could not set queue depth (nvme0n3) 00:21:01.917 Could not set queue depth (nvme0n4) 00:21:02.177 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:02.177 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:02.177 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:02.177 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:02.177 fio-3.35 00:21:02.177 Starting 4 threads 00:21:03.562 00:21:03.562 job0: (groupid=0, jobs=1): err= 0: pid=543539: Fri Jun 7 14:23:26 2024 00:21:03.562 read: IOPS=6053, BW=23.6MiB/s 
(24.8MB/s)(24.0MiB/1015msec) 00:21:03.562 slat (nsec): min=919, max=10173k, avg=90228.26, stdev=702369.42 00:21:03.562 clat (usec): min=3282, max=21755, avg=10924.45, stdev=2808.48 00:21:03.562 lat (usec): min=3287, max=24063, avg=11014.68, stdev=2860.01 00:21:03.562 clat percentiles (usec): 00:21:03.562 | 1.00th=[ 4424], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[ 9241], 00:21:03.562 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[10552], 00:21:03.562 | 70.00th=[11076], 80.00th=[13042], 90.00th=[15533], 95.00th=[16909], 00:21:03.562 | 99.00th=[18744], 99.50th=[20579], 99.90th=[21103], 99.95th=[21365], 00:21:03.562 | 99.99th=[21627] 00:21:03.563 write: IOPS=6342, BW=24.8MiB/s (26.0MB/s)(25.1MiB/1015msec); 0 zone resets 00:21:03.563 slat (nsec): min=1632, max=7077.4k, avg=64889.84, stdev=221928.55 00:21:03.563 clat (usec): min=1426, max=35951, avg=9545.82, stdev=3698.75 00:21:03.563 lat (usec): min=1436, max=35954, avg=9610.71, stdev=3719.90 00:21:03.563 clat percentiles (usec): 00:21:03.563 | 1.00th=[ 2835], 5.00th=[ 4359], 10.00th=[ 5932], 20.00th=[ 8455], 00:21:03.563 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:21:03.563 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:21:03.563 | 99.00th=[26870], 99.50th=[30540], 99.90th=[34866], 99.95th=[35914], 00:21:03.563 | 99.99th=[35914] 00:21:03.563 bw ( KiB/s): min=24968, max=25520, per=24.96%, avg=25244.00, stdev=390.32, samples=2 00:21:03.563 iops : min= 6242, max= 6380, avg=6311.00, stdev=97.58, samples=2 00:21:03.563 lat (msec) : 2=0.14%, 4=2.27%, 10=56.94%, 20=38.70%, 50=1.95% 00:21:03.563 cpu : usr=3.75%, sys=5.23%, ctx=887, majf=0, minf=1 00:21:03.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:03.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:03.563 issued rwts: total=6144,6438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.563 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:03.563 job1: (groupid=0, jobs=1): err= 0: pid=543540: Fri Jun 7 14:23:26 2024 00:21:03.563 read: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(25.9MiB/1005msec) 00:21:03.563 slat (nsec): min=902, max=9441.3k, avg=69515.38, stdev=480938.26 00:21:03.563 clat (usec): min=1824, max=30931, avg=8749.19, stdev=2589.80 00:21:03.563 lat (usec): min=4288, max=30933, avg=8818.71, stdev=2633.95 00:21:03.563 clat percentiles (usec): 00:21:03.563 | 1.00th=[ 4883], 5.00th=[ 6128], 10.00th=[ 6718], 20.00th=[ 7439], 00:21:03.563 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8225], 60.00th=[ 8586], 00:21:03.563 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[10552], 95.00th=[12911], 00:21:03.563 | 99.00th=[20317], 99.50th=[25560], 99.90th=[30016], 99.95th=[30802], 00:21:03.563 | 99.99th=[30802] 00:21:03.563 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:21:03.563 slat (nsec): min=1583, max=6538.0k, avg=75509.80, stdev=428344.13 00:21:03.563 clat (usec): min=2391, max=33814, avg=10351.48, stdev=5924.87 00:21:03.563 lat (usec): min=2399, max=33820, avg=10426.99, stdev=5969.24 00:21:03.563 clat percentiles (usec): 00:21:03.563 | 1.00th=[ 3982], 5.00th=[ 5473], 10.00th=[ 6194], 20.00th=[ 6783], 00:21:03.563 | 30.00th=[ 7570], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8291], 00:21:03.563 | 70.00th=[ 8717], 80.00th=[13960], 90.00th=[19006], 95.00th=[25035], 00:21:03.563 | 99.00th=[32113], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 
00:21:03.563 | 99.99th=[33817] 00:21:03.563 bw ( KiB/s): min=23504, max=29744, per=26.33%, avg=26624.00, stdev=4412.35, samples=2 00:21:03.563 iops : min= 5876, max= 7436, avg=6656.00, stdev=1103.09, samples=2 00:21:03.563 lat (msec) : 2=0.01%, 4=0.53%, 10=76.89%, 20=17.47%, 50=5.11% 00:21:03.563 cpu : usr=4.88%, sys=6.67%, ctx=499, majf=0, minf=1 00:21:03.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:21:03.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:03.563 issued rwts: total=6643,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.563 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:03.563 job2: (groupid=0, jobs=1): err= 0: pid=543541: Fri Jun 7 14:23:26 2024 00:21:03.563 read: IOPS=6508, BW=25.4MiB/s (26.7MB/s)(25.5MiB/1002msec) 00:21:03.563 slat (nsec): min=926, max=13058k, avg=78193.15, stdev=531133.69 00:21:03.563 clat (usec): min=1097, max=36234, avg=9889.61, stdev=3775.95 00:21:03.563 lat (usec): min=4649, max=36260, avg=9967.80, stdev=3821.95 00:21:03.563 clat percentiles (usec): 00:21:03.563 | 1.00th=[ 5342], 5.00th=[ 6718], 10.00th=[ 7570], 20.00th=[ 7963], 00:21:03.563 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9634], 00:21:03.563 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11863], 95.00th=[16450], 00:21:03.563 | 99.00th=[27132], 99.50th=[30278], 99.90th=[33817], 99.95th=[35914], 00:21:03.563 | 99.99th=[36439] 00:21:03.563 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:21:03.563 slat (nsec): min=1579, max=7138.7k, avg=69326.44, stdev=371135.43 00:21:03.563 clat (usec): min=1182, max=33985, avg=9379.47, stdev=2974.63 00:21:03.563 lat (usec): min=1192, max=34583, avg=9448.80, stdev=2997.54 00:21:03.563 clat percentiles (usec): 00:21:03.563 | 1.00th=[ 5145], 5.00th=[ 6390], 10.00th=[ 7242], 20.00th=[ 7701], 00:21:03.563 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9241], 00:21:03.563 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[13829], 95.00th=[14353], 00:21:03.563 | 99.00th=[23725], 99.50th=[27395], 99.90th=[29230], 99.95th=[29492], 00:21:03.563 | 99.99th=[33817] 00:21:03.563 bw ( KiB/s): min=24576, max=28672, per=26.33%, avg=26624.00, stdev=2896.31, samples=2 00:21:03.563 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:21:03.563 lat (msec) : 2=0.02%, 10=73.86%, 20=23.78%, 50=2.34% 00:21:03.563 cpu : usr=4.60%, sys=5.00%, ctx=743, majf=0, minf=1 00:21:03.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:21:03.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:03.563 issued rwts: total=6522,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.563 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:03.563 job3: (groupid=0, jobs=1): err= 0: pid=543542: Fri Jun 7 14:23:26 2024 00:21:03.563 read: IOPS=5548, BW=21.7MiB/s (22.7MB/s)(22.0MiB/1015msec) 00:21:03.563 slat (nsec): min=887, max=26672k, avg=88076.88, stdev=661978.17 00:21:03.563 clat (usec): min=4635, max=48038, avg=11256.14, stdev=6368.37 00:21:03.563 lat (usec): min=4641, max=48064, avg=11344.22, stdev=6422.36 00:21:03.563 clat percentiles (usec): 00:21:03.563 | 1.00th=[ 4883], 5.00th=[ 6849], 10.00th=[ 7439], 20.00th=[ 7898], 00:21:03.563 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9765], 
00:21:03.563 | 70.00th=[10290], 80.00th=[10814], 90.00th=[21365], 95.00th=[28705], 00:21:03.563 | 99.00th=[34866], 99.50th=[36963], 99.90th=[38536], 99.95th=[38536], 00:21:03.563 | 99.99th=[47973] 00:21:03.563 write: IOPS=5821, BW=22.7MiB/s (23.8MB/s)(23.1MiB/1015msec); 0 zone resets 00:21:03.563 slat (nsec): min=1523, max=9500.0k, avg=81055.64, stdev=449937.05 00:21:03.563 clat (usec): min=1121, max=66649, avg=11059.93, stdev=8866.79 00:21:03.563 lat (usec): min=1131, max=66656, avg=11140.98, stdev=8925.31 00:21:03.563 clat percentiles (usec): 00:21:03.563 | 1.00th=[ 5145], 5.00th=[ 6325], 10.00th=[ 7242], 20.00th=[ 7635], 00:21:03.563 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 8979], 00:21:03.563 | 70.00th=[ 9372], 80.00th=[10683], 90.00th=[16188], 95.00th=[24773], 00:21:03.563 | 99.00th=[61080], 99.50th=[63701], 99.90th=[66847], 99.95th=[66847], 00:21:03.563 | 99.99th=[66847] 00:21:03.563 bw ( KiB/s): min=21680, max=24576, per=22.87%, avg=23128.00, stdev=2047.78, samples=2 00:21:03.563 iops : min= 5420, max= 6144, avg=5782.00, stdev=511.95, samples=2 00:21:03.563 lat (msec) : 2=0.07%, 10=71.16%, 20=19.63%, 50=8.12%, 100=1.03% 00:21:03.563 cpu : usr=3.35%, sys=5.03%, ctx=728, majf=0, minf=1 00:21:03.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:03.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:03.563 issued rwts: total=5632,5909,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.563 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:03.563 00:21:03.563 Run status group 0 (all jobs): 00:21:03.563 READ: bw=96.0MiB/s (101MB/s), 21.7MiB/s-25.8MiB/s (22.7MB/s-27.1MB/s), io=97.4MiB (102MB), run=1002-1015msec 00:21:03.563 WRITE: bw=98.7MiB/s (104MB/s), 22.7MiB/s-25.9MiB/s (23.8MB/s-27.2MB/s), io=100MiB (105MB), run=1002-1015msec 00:21:03.563 00:21:03.563 Disk stats (read/write): 00:21:03.563 nvme0n1: ios=5143/5632, merge=0/0, ticks=53297/49254, in_queue=102551, util=97.49% 00:21:03.563 nvme0n2: ios=5518/5632, merge=0/0, ticks=38564/42721, in_queue=81285, util=96.43% 00:21:03.563 nvme0n3: ios=5236/5632, merge=0/0, ticks=28809/26480, in_queue=55289, util=96.94% 00:21:03.563 nvme0n4: ios=4608/5087, merge=0/0, ticks=27288/28653, in_queue=55941, util=89.45% 00:21:03.563 14:23:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:21:03.563 [global] 00:21:03.563 thread=1 00:21:03.563 invalidate=1 00:21:03.563 rw=randwrite 00:21:03.563 time_based=1 00:21:03.563 runtime=1 00:21:03.563 ioengine=libaio 00:21:03.563 direct=1 00:21:03.563 bs=4096 00:21:03.563 iodepth=128 00:21:03.563 norandommap=0 00:21:03.563 numjobs=1 00:21:03.563 00:21:03.563 verify_dump=1 00:21:03.563 verify_backlog=512 00:21:03.563 verify_state_save=0 00:21:03.563 do_verify=1 00:21:03.563 verify=crc32c-intel 00:21:03.563 [job0] 00:21:03.563 filename=/dev/nvme0n1 00:21:03.563 [job1] 00:21:03.563 filename=/dev/nvme0n2 00:21:03.563 [job2] 00:21:03.563 filename=/dev/nvme0n3 00:21:03.563 [job3] 00:21:03.563 filename=/dev/nvme0n4 00:21:03.563 Could not set queue depth (nvme0n1) 00:21:03.563 Could not set queue depth (nvme0n2) 00:21:03.563 Could not set queue depth (nvme0n3) 00:21:03.563 Could not set queue depth (nvme0n4) 00:21:03.825 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:21:03.825 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:03.825 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:03.825 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:03.825 fio-3.35 00:21:03.825 Starting 4 threads 00:21:05.225 00:21:05.225 job0: (groupid=0, jobs=1): err= 0: pid=544057: Fri Jun 7 14:23:28 2024 00:21:05.225 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:21:05.225 slat (nsec): min=864, max=20855k, avg=91053.88, stdev=675125.55 00:21:05.225 clat (usec): min=2381, max=72482, avg=11290.81, stdev=10066.08 00:21:05.225 lat (usec): min=2415, max=72488, avg=11381.86, stdev=10155.76 00:21:05.225 clat percentiles (usec): 00:21:05.225 | 1.00th=[ 3982], 5.00th=[ 5407], 10.00th=[ 5866], 20.00th=[ 6849], 00:21:05.225 | 30.00th=[ 7111], 40.00th=[ 7504], 50.00th=[ 7963], 60.00th=[ 8586], 00:21:05.225 | 70.00th=[ 9634], 80.00th=[12256], 90.00th=[18220], 95.00th=[32637], 00:21:05.225 | 99.00th=[59507], 99.50th=[65799], 99.90th=[70779], 99.95th=[70779], 00:21:05.225 | 99.99th=[72877] 00:21:05.225 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:21:05.225 slat (nsec): min=1445, max=17071k, avg=114150.71, stdev=777308.98 00:21:05.225 clat (usec): min=637, max=91036, avg=16428.57, stdev=19612.27 00:21:05.225 lat (usec): min=645, max=91041, avg=16542.72, stdev=19743.96 00:21:05.225 clat percentiles (usec): 00:21:05.225 | 1.00th=[ 2343], 5.00th=[ 4359], 10.00th=[ 5080], 20.00th=[ 5932], 00:21:05.225 | 30.00th=[ 6259], 40.00th=[ 7439], 50.00th=[ 8356], 60.00th=[ 9634], 00:21:05.225 | 70.00th=[13698], 80.00th=[18482], 90.00th=[41681], 95.00th=[73925], 00:21:05.225 | 99.00th=[84411], 99.50th=[87557], 99.90th=[90702], 99.95th=[90702], 00:21:05.225 | 99.99th=[90702] 00:21:05.225 bw ( KiB/s): min=12288, max=24576, per=22.21%, avg=18432.00, stdev=8688.93, samples=2 00:21:05.225 iops : min= 3072, max= 6144, avg=4608.00, stdev=2172.23, samples=2 00:21:05.225 lat (usec) : 750=0.03%, 1000=0.03% 00:21:05.225 lat (msec) : 2=0.42%, 4=1.61%, 10=65.00%, 20=19.73%, 50=7.57% 00:21:05.225 lat (msec) : 100=5.61% 00:21:05.225 cpu : usr=2.98%, sys=5.27%, ctx=374, majf=0, minf=1 00:21:05.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:05.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:05.225 issued rwts: total=4603,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:05.225 job1: (groupid=0, jobs=1): err= 0: pid=544059: Fri Jun 7 14:23:28 2024 00:21:05.225 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:21:05.225 slat (nsec): min=882, max=11395k, avg=92524.51, stdev=613066.80 00:21:05.225 clat (usec): min=3831, max=42999, avg=10859.06, stdev=4725.41 00:21:05.225 lat (usec): min=3836, max=43007, avg=10951.58, stdev=4781.28 00:21:05.225 clat percentiles (usec): 00:21:05.225 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 7111], 00:21:05.225 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9765], 60.00th=[10814], 00:21:05.225 | 70.00th=[12125], 80.00th=[13829], 90.00th=[16057], 95.00th=[20317], 00:21:05.225 | 99.00th=[25822], 99.50th=[32637], 99.90th=[43254], 99.95th=[43254], 00:21:05.225 | 99.99th=[43254] 00:21:05.225 write: 
IOPS=4833, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1006msec); 0 zone resets 00:21:05.225 slat (nsec): min=1469, max=7335.7k, avg=110685.98, stdev=591353.13 00:21:05.225 clat (usec): min=404, max=80280, avg=15903.11, stdev=16420.60 00:21:05.225 lat (usec): min=414, max=80289, avg=16013.80, stdev=16521.09 00:21:05.225 clat percentiles (usec): 00:21:05.225 | 1.00th=[ 938], 5.00th=[ 2114], 10.00th=[ 4621], 20.00th=[ 6325], 00:21:05.225 | 30.00th=[ 7373], 40.00th=[ 8979], 50.00th=[10814], 60.00th=[11863], 00:21:05.225 | 70.00th=[12649], 80.00th=[17171], 90.00th=[39584], 95.00th=[61080], 00:21:05.225 | 99.00th=[73925], 99.50th=[77071], 99.90th=[80217], 99.95th=[80217], 00:21:05.225 | 99.99th=[80217] 00:21:05.225 bw ( KiB/s): min=17992, max=19888, per=22.82%, avg=18940.00, stdev=1340.67, samples=2 00:21:05.225 iops : min= 4498, max= 4972, avg=4735.00, stdev=335.17, samples=2 00:21:05.225 lat (usec) : 500=0.04%, 750=0.23%, 1000=0.37% 00:21:05.225 lat (msec) : 2=1.46%, 4=3.00%, 10=43.29%, 20=39.66%, 50=8.33% 00:21:05.225 lat (msec) : 100=3.61% 00:21:05.225 cpu : usr=4.08%, sys=4.18%, ctx=484, majf=0, minf=1 00:21:05.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:21:05.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:05.225 issued rwts: total=4608,4862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:05.225 job2: (groupid=0, jobs=1): err= 0: pid=544061: Fri Jun 7 14:23:28 2024 00:21:05.225 read: IOPS=6071, BW=23.7MiB/s (24.9MB/s)(24.0MiB/1012msec) 00:21:05.225 slat (nsec): min=903, max=12364k, avg=74786.44, stdev=609531.38 00:21:05.225 clat (usec): min=963, max=72817, avg=10629.89, stdev=6793.86 00:21:05.225 lat (usec): min=966, max=72821, avg=10704.67, stdev=6819.23 00:21:05.225 clat percentiles (usec): 00:21:05.225 | 1.00th=[ 1663], 5.00th=[ 2376], 10.00th=[ 5342], 20.00th=[ 7832], 00:21:05.225 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9896], 60.00th=[10421], 00:21:05.225 | 70.00th=[10683], 80.00th=[12518], 90.00th=[14615], 95.00th=[19006], 00:21:05.225 | 99.00th=[36963], 99.50th=[60556], 99.90th=[71828], 99.95th=[72877], 00:21:05.225 | 99.99th=[72877] 00:21:05.225 write: IOPS=6330, BW=24.7MiB/s (25.9MB/s)(25.0MiB/1012msec); 0 zone resets 00:21:05.225 slat (nsec): min=1508, max=11519k, avg=67573.97, stdev=498579.11 00:21:05.225 clat (usec): min=457, max=67501, avg=9862.98, stdev=7075.26 00:21:05.225 lat (usec): min=662, max=67512, avg=9930.56, stdev=7103.64 00:21:05.225 clat percentiles (usec): 00:21:05.225 | 1.00th=[ 1172], 5.00th=[ 2212], 10.00th=[ 3720], 20.00th=[ 5800], 00:21:05.225 | 30.00th=[ 6783], 40.00th=[ 7308], 50.00th=[ 8717], 60.00th=[ 9503], 00:21:05.225 | 70.00th=[10552], 80.00th=[12387], 90.00th=[16188], 95.00th=[22152], 00:21:05.225 | 99.00th=[44303], 99.50th=[48497], 99.90th=[59507], 99.95th=[59507], 00:21:05.225 | 99.99th=[67634] 00:21:05.225 bw ( KiB/s): min=23168, max=27056, per=30.26%, avg=25112.00, stdev=2749.23, samples=2 00:21:05.225 iops : min= 5792, max= 6764, avg=6278.00, stdev=687.31, samples=2 00:21:05.225 lat (usec) : 500=0.01%, 750=0.13%, 1000=0.25% 00:21:05.225 lat (msec) : 2=2.69%, 4=6.42%, 10=49.39%, 20=36.08%, 50=4.54% 00:21:05.225 lat (msec) : 100=0.49% 00:21:05.225 cpu : usr=3.96%, sys=6.82%, ctx=448, majf=0, minf=1 00:21:05.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:05.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:05.225 issued rwts: total=6144,6406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:05.225 job3: (groupid=0, jobs=1): err= 0: pid=544062: Fri Jun 7 14:23:28 2024 00:21:05.225 read: IOPS=4924, BW=19.2MiB/s (20.2MB/s)(19.3MiB/1004msec) 00:21:05.225 slat (nsec): min=907, max=15234k, avg=115436.01, stdev=709680.96 00:21:05.225 clat (usec): min=1885, max=62529, avg=13832.04, stdev=10277.11 00:21:05.225 lat (usec): min=5280, max=62553, avg=13947.48, stdev=10362.24 00:21:05.225 clat percentiles (usec): 00:21:05.225 | 1.00th=[ 5932], 5.00th=[ 6980], 10.00th=[ 7439], 20.00th=[ 8160], 00:21:05.225 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[10683], 00:21:05.225 | 70.00th=[12125], 80.00th=[15401], 90.00th=[30278], 95.00th=[38536], 00:21:05.225 | 99.00th=[52691], 99.50th=[52691], 99.90th=[58459], 99.95th=[58983], 00:21:05.225 | 99.99th=[62653] 00:21:05.225 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:21:05.225 slat (nsec): min=1531, max=8934.7k, avg=79734.29, stdev=468249.55 00:21:05.225 clat (usec): min=4238, max=45354, avg=11408.28, stdev=7075.78 00:21:05.225 lat (usec): min=4242, max=45361, avg=11488.01, stdev=7113.73 00:21:05.225 clat percentiles (usec): 00:21:05.225 | 1.00th=[ 5014], 5.00th=[ 6194], 10.00th=[ 6521], 20.00th=[ 6915], 00:21:05.225 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8979], 60.00th=[ 9896], 00:21:05.225 | 70.00th=[11076], 80.00th=[13435], 90.00th=[21365], 95.00th=[26084], 00:21:05.225 | 99.00th=[41157], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:21:05.225 | 99.99th=[45351] 00:21:05.225 bw ( KiB/s): min=12288, max=28672, per=24.68%, avg=20480.00, stdev=11585.24, samples=2 00:21:05.225 iops : min= 3072, max= 7168, avg=5120.00, stdev=2896.31, samples=2 00:21:05.225 lat (msec) : 2=0.01%, 10=58.40%, 20=28.70%, 50=11.62%, 100=1.28% 00:21:05.225 cpu : usr=2.69%, sys=3.99%, ctx=573, majf=0, minf=1 00:21:05.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:21:05.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:05.225 issued rwts: total=4944,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:05.225 00:21:05.225 Run status group 0 (all jobs): 00:21:05.225 READ: bw=78.4MiB/s (82.2MB/s), 17.9MiB/s-23.7MiB/s (18.7MB/s-24.9MB/s), io=79.3MiB (83.1MB), run=1004-1012msec 00:21:05.225 WRITE: bw=81.0MiB/s (85.0MB/s), 17.9MiB/s-24.7MiB/s (18.7MB/s-25.9MB/s), io=82.0MiB (86.0MB), run=1004-1012msec 00:21:05.225 00:21:05.225 Disk stats (read/write): 00:21:05.225 nvme0n1: ios=4111/4263, merge=0/0, ticks=36227/52514, in_queue=88741, util=88.38% 00:21:05.225 nvme0n2: ios=3215/3584, merge=0/0, ticks=33877/58775, in_queue=92652, util=86.14% 00:21:05.225 nvme0n3: ios=5120/5511, merge=0/0, ticks=52869/46442, in_queue=99311, util=88.09% 00:21:05.225 nvme0n4: ios=4431/4608, merge=0/0, ticks=17843/13061, in_queue=30904, util=96.91% 00:21:05.225 14:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:21:05.225 14:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=544393 00:21:05.225 14:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:21:05.225 14:23:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:21:05.225 [global] 00:21:05.225 thread=1 00:21:05.225 invalidate=1 00:21:05.226 rw=read 00:21:05.226 time_based=1 00:21:05.226 runtime=10 00:21:05.226 ioengine=libaio 00:21:05.226 direct=1 00:21:05.226 bs=4096 00:21:05.226 iodepth=1 00:21:05.226 norandommap=1 00:21:05.226 numjobs=1 00:21:05.226 00:21:05.226 [job0] 00:21:05.226 filename=/dev/nvme0n1 00:21:05.226 [job1] 00:21:05.226 filename=/dev/nvme0n2 00:21:05.226 [job2] 00:21:05.226 filename=/dev/nvme0n3 00:21:05.226 [job3] 00:21:05.226 filename=/dev/nvme0n4 00:21:05.226 Could not set queue depth (nvme0n1) 00:21:05.226 Could not set queue depth (nvme0n2) 00:21:05.226 Could not set queue depth (nvme0n3) 00:21:05.226 Could not set queue depth (nvme0n4) 00:21:05.485 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:05.485 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:05.485 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:05.485 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:05.485 fio-3.35 00:21:05.485 Starting 4 threads 00:21:08.031 14:23:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:21:08.031 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=434176, buflen=4096 00:21:08.031 fio: pid=544590, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:08.031 14:23:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:21:08.292 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=303104, buflen=4096 00:21:08.292 fio: pid=544589, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:08.292 14:23:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:08.292 14:23:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:21:08.554 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=9940992, buflen=4096 00:21:08.554 fio: pid=544587, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:08.554 14:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:08.554 14:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:21:08.554 14:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:08.554 14:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:21:08.554 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=13922304, buflen=4096 00:21:08.554 fio: pid=544588, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:21:08.814 00:21:08.815 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=544587: Fri Jun 7 14:23:32 2024 00:21:08.815 read: 
IOPS=828, BW=3312KiB/s (3392kB/s)(9708KiB/2931msec) 00:21:08.815 slat (usec): min=7, max=15773, avg=43.03, stdev=467.49 00:21:08.815 clat (usec): min=286, max=46678, avg=1156.66, stdev=2652.35 00:21:08.815 lat (usec): min=297, max=46705, avg=1195.22, stdev=2684.11 00:21:08.815 clat percentiles (usec): 00:21:08.815 | 1.00th=[ 553], 5.00th=[ 635], 10.00th=[ 758], 20.00th=[ 889], 00:21:08.815 | 30.00th=[ 947], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1045], 00:21:08.815 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1205], 00:21:08.815 | 99.00th=[ 1287], 99.50th=[ 1385], 99.90th=[42206], 99.95th=[42730], 00:21:08.815 | 99.99th=[46924] 00:21:08.815 bw ( KiB/s): min= 2864, max= 4096, per=45.99%, avg=3545.60, stdev=497.62, samples=5 00:21:08.815 iops : min= 716, max= 1024, avg=886.40, stdev=124.41, samples=5 00:21:08.815 lat (usec) : 500=0.54%, 750=8.73%, 1000=34.93% 00:21:08.815 lat (msec) : 2=55.35%, 50=0.41% 00:21:08.815 cpu : usr=1.33%, sys=3.45%, ctx=2435, majf=0, minf=1 00:21:08.815 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:08.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.815 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.815 issued rwts: total=2428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.815 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:08.815 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=544588: Fri Jun 7 14:23:32 2024 00:21:08.815 read: IOPS=1090, BW=4362KiB/s (4467kB/s)(13.3MiB/3117msec) 00:21:08.815 slat (usec): min=5, max=15681, avg=37.58, stdev=433.14 00:21:08.815 clat (usec): min=232, max=42664, avg=873.62, stdev=2548.97 00:21:08.815 lat (usec): min=243, max=52838, avg=911.20, stdev=2635.57 00:21:08.815 clat percentiles (usec): 00:21:08.815 | 1.00th=[ 412], 5.00th=[ 498], 10.00th=[ 553], 20.00th=[ 611], 00:21:08.815 | 30.00th=[ 652], 40.00th=[ 685], 50.00th=[ 717], 60.00th=[ 750], 00:21:08.815 | 70.00th=[ 783], 80.00th=[ 816], 90.00th=[ 865], 95.00th=[ 938], 00:21:08.815 | 99.00th=[ 1139], 99.50th=[ 1696], 99.90th=[42206], 99.95th=[42206], 00:21:08.815 | 99.99th=[42730] 00:21:08.815 bw ( KiB/s): min= 895, max= 5675, per=58.48%, avg=4507.00, stdev=1875.50, samples=6 00:21:08.815 iops : min= 223, max= 1418, avg=1126.50, stdev=469.07, samples=6 00:21:08.815 lat (usec) : 250=0.09%, 500=4.91%, 750=55.65%, 1000=35.56% 00:21:08.815 lat (msec) : 2=3.35%, 4=0.03%, 50=0.38% 00:21:08.815 cpu : usr=1.35%, sys=4.20%, ctx=3404, majf=0, minf=1 00:21:08.815 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:08.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.815 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.815 issued rwts: total=3400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.815 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:08.815 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=544589: Fri Jun 7 14:23:32 2024 00:21:08.815 read: IOPS=27, BW=107KiB/s (110kB/s)(296KiB/2764msec) 00:21:08.815 slat (usec): min=2, max=15564, avg=230.23, stdev=1794.62 00:21:08.815 clat (usec): min=469, max=43112, avg=37101.29, stdev=13601.93 00:21:08.815 lat (usec): min=471, max=57952, avg=37334.31, stdev=13805.77 00:21:08.815 clat percentiles (usec): 00:21:08.815 | 1.00th=[ 469], 5.00th=[ 701], 10.00th=[ 1090], 20.00th=[41681], 00:21:08.815 | 
30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:21:08.815 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:21:08.815 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:21:08.815 | 99.99th=[43254] 00:21:08.815 bw ( KiB/s): min= 96, max= 144, per=1.40%, avg=108.80, stdev=20.08, samples=5 00:21:08.815 iops : min= 24, max= 36, avg=27.20, stdev= 5.02, samples=5 00:21:08.815 lat (usec) : 500=1.33%, 750=5.33%, 1000=1.33% 00:21:08.815 lat (msec) : 2=4.00%, 50=86.67% 00:21:08.815 cpu : usr=0.11%, sys=0.00%, ctx=76, majf=0, minf=1 00:21:08.815 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:08.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.815 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.815 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.815 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:08.815 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=544590: Fri Jun 7 14:23:32 2024 00:21:08.815 read: IOPS=41, BW=163KiB/s (167kB/s)(424KiB/2601msec) 00:21:08.815 slat (nsec): min=7572, max=43166, avg=24625.26, stdev=3271.33 00:21:08.815 clat (usec): min=597, max=42829, avg=24497.25, stdev=20401.03 00:21:08.815 lat (usec): min=622, max=42853, avg=24521.87, stdev=20400.69 00:21:08.815 clat percentiles (usec): 00:21:08.815 | 1.00th=[ 635], 5.00th=[ 717], 10.00th=[ 766], 20.00th=[ 857], 00:21:08.815 | 30.00th=[ 930], 40.00th=[ 1020], 50.00th=[41681], 60.00th=[41681], 00:21:08.815 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:21:08.815 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:21:08.815 | 99.99th=[42730] 00:21:08.815 bw ( KiB/s): min= 104, max= 216, per=2.13%, avg=164.80, stdev=41.41, samples=5 00:21:08.815 iops : min= 26, max= 54, avg=41.20, stdev=10.35, samples=5 00:21:08.815 lat (usec) : 750=9.35%, 1000=28.97% 00:21:08.815 lat (msec) : 2=3.74%, 50=57.01% 00:21:08.815 cpu : usr=0.04%, sys=0.12%, ctx=107, majf=0, minf=2 00:21:08.815 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:08.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.815 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:08.815 issued rwts: total=107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:08.815 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:08.815 00:21:08.815 Run status group 0 (all jobs): 00:21:08.815 READ: bw=7707KiB/s (7892kB/s), 107KiB/s-4362KiB/s (110kB/s-4467kB/s), io=23.5MiB (24.6MB), run=2601-3117msec 00:21:08.815 00:21:08.815 Disk stats (read/write): 00:21:08.815 nvme0n1: ios=2455/0, merge=0/0, ticks=3044/0, in_queue=3044, util=99.27% 00:21:08.815 nvme0n2: ios=3398/0, merge=0/0, ticks=2585/0, in_queue=2585, util=94.24% 00:21:08.815 nvme0n3: ios=70/0, merge=0/0, ticks=2578/0, in_queue=2578, util=96.03% 00:21:08.815 nvme0n4: ios=105/0, merge=0/0, ticks=2555/0, in_queue=2555, util=96.42% 00:21:08.815 14:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:08.815 14:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:21:09.077 14:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:21:09.077 14:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:21:09.077 14:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:09.077 14:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:21:09.338 14:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:09.338 14:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:21:09.599 14:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:21:09.599 14:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 544393 00:21:09.599 14:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:21:09.599 14:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:09.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:09.599 14:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:09.599 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:21:09.599 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:21:09.599 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:09.599 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:21:09.599 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:09.599 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:21:09.599 14:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:21:09.599 14:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:21:09.599 nvmf hotplug test: fio failed as expected 00:21:09.599 14:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:09.861 rmmod nvme_tcp 00:21:09.861 rmmod nvme_fabrics 
00:21:09.861 rmmod nvme_keyring 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 540891 ']' 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 540891 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 540891 ']' 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 540891 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 540891 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 540891' 00:21:09.861 killing process with pid 540891 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 540891 00:21:09.861 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 540891 00:21:10.122 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:10.122 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:10.122 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:10.122 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:10.122 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:10.122 14:23:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.122 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.122 14:23:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.035 14:23:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:12.035 00:21:12.035 real 0m29.215s 00:21:12.035 user 2m36.119s 00:21:12.035 sys 0m9.680s 00:21:12.035 14:23:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:12.035 14:23:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.035 ************************************ 00:21:12.035 END TEST nvmf_fio_target 00:21:12.035 ************************************ 00:21:12.035 14:23:35 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:12.035 14:23:35 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:12.035 14:23:35 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:12.035 14:23:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:12.295 ************************************ 00:21:12.295 START TEST nvmf_bdevio 00:21:12.295 ************************************ 00:21:12.295 14:23:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:12.295 * Looking for test storage... 00:21:12.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:12.295 14:23:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:12.295 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:21:12.295 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.295 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.295 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:21:12.296 14:23:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:20.441 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:20.441 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.441 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:20.441 Found net devices under 0000:31:00.0: cvl_0_0 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:20.442 
Found net devices under 0000:31:00.1: cvl_0_1 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:20.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:20.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:21:20.442 00:21:20.442 --- 10.0.0.2 ping statistics --- 00:21:20.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.442 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:20.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:20.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:21:20.442 00:21:20.442 --- 10.0.0.1 ping statistics --- 00:21:20.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.442 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=550153 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 550153 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 550153 ']' 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:20.442 14:23:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:20.442 [2024-06-07 14:23:43.990351] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:21:20.442 [2024-06-07 14:23:43.990419] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.442 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.442 [2024-06-07 14:23:44.086769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:20.704 [2024-06-07 14:23:44.136888] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.704 [2024-06-07 14:23:44.136948] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
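For reference, the nvmf_tcp_init phase traced above reduces to a short iproute2/iptables sequence: one port of the NIC pair is moved into a private network namespace to act as the target side, the peer port stays in the root namespace as the initiator, connectivity is checked with ping, and the NVMe/TCP port is opened in the firewall. A minimal sketch using the values from this run (interfaces cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, port 4420) follows; other runners will use different names and addresses.

    # create the target namespace and move one NIC port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator (root namespace) and target (namespace) sides
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # bring the links up on both sides
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow the NVMe/TCP port through on the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify connectivity in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1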
00:21:20.704 [2024-06-07 14:23:44.136956] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.704 [2024-06-07 14:23:44.136963] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.704 [2024-06-07 14:23:44.136969] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:20.704 [2024-06-07 14:23:44.137132] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:21:20.704 [2024-06-07 14:23:44.137265] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:21:20.704 [2024-06-07 14:23:44.137463] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:21:20.704 [2024-06-07 14:23:44.137464] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:21.275 [2024-06-07 14:23:44.854645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:21.275 Malloc0 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.275 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:21:21.275 [2024-06-07 14:23:44.919625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.536 14:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.536 14:23:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:21.536 14:23:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:21.536 14:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:21:21.536 14:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:21:21.536 14:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.536 14:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.536 { 00:21:21.536 "params": { 00:21:21.536 "name": "Nvme$subsystem", 00:21:21.536 "trtype": "$TEST_TRANSPORT", 00:21:21.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.536 "adrfam": "ipv4", 00:21:21.536 "trsvcid": "$NVMF_PORT", 00:21:21.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.536 "hdgst": ${hdgst:-false}, 00:21:21.536 "ddgst": ${ddgst:-false} 00:21:21.536 }, 00:21:21.536 "method": "bdev_nvme_attach_controller" 00:21:21.536 } 00:21:21.536 EOF 00:21:21.536 )") 00:21:21.536 14:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:21:21.536 14:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:21:21.536 14:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:21:21.536 14:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:21.536 "params": { 00:21:21.536 "name": "Nvme1", 00:21:21.536 "trtype": "tcp", 00:21:21.536 "traddr": "10.0.0.2", 00:21:21.536 "adrfam": "ipv4", 00:21:21.536 "trsvcid": "4420", 00:21:21.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.536 "hdgst": false, 00:21:21.536 "ddgst": false 00:21:21.536 }, 00:21:21.536 "method": "bdev_nvme_attach_controller" 00:21:21.536 }' 00:21:21.536 [2024-06-07 14:23:44.975111] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:21:21.536 [2024-06-07 14:23:44.975172] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid550322 ] 00:21:21.536 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.536 [2024-06-07 14:23:45.047448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:21.536 [2024-06-07 14:23:45.088496] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.536 [2024-06-07 14:23:45.088704] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.536 [2024-06-07 14:23:45.088708] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.842 I/O targets: 00:21:21.842 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:21.842 00:21:21.842 00:21:21.842 CUnit - A unit testing framework for C - Version 2.1-3 00:21:21.842 http://cunit.sourceforge.net/ 00:21:21.842 00:21:21.842 00:21:21.842 Suite: bdevio tests on: Nvme1n1 00:21:21.842 Test: blockdev write read block ...passed 00:21:22.107 Test: blockdev write zeroes read block ...passed 00:21:22.107 Test: blockdev write zeroes read no split ...passed 00:21:22.107 Test: blockdev write zeroes read split ...passed 00:21:22.107 Test: blockdev write zeroes read split partial ...passed 00:21:22.107 Test: blockdev reset ...[2024-06-07 14:23:45.558541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.107 [2024-06-07 14:23:45.558610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180aa40 (9): Bad file descriptor 00:21:22.107 [2024-06-07 14:23:45.609877] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
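The target that bdevio exercises above is provisioned over JSON-RPC: in the trace, rpc_cmd is the test-framework wrapper that forwards to scripts/rpc.py against the nvmf_tgt launched inside the namespace. A rough, hedged reconstruction of that setup from the repository root (paths shortened; the namespace, core mask, addresses and NQNs are the ones used in this run) looks like this:

    # start the target application inside the test namespace (as nvmfappstart does)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    # create the TCP transport with the same options as the trace (-o, -u 8192)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # back the namespace with a 64 MiB, 512-byte-block malloc bdev
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # create the subsystem, attach the namespace, and listen on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio then consumes the JSON config generated by gen_nvmf_target_json
    # (from test/nvmf/common.sh), which performs the bdev_nvme_attach_controller
    # to 10.0.0.2:4420 shown in the printf'd block above (seen as /dev/fd/62)
    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)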
00:21:22.107 passed 00:21:22.107 Test: blockdev write read 8 blocks ...passed 00:21:22.107 Test: blockdev write read size > 128k ...passed 00:21:22.107 Test: blockdev write read invalid size ...passed 00:21:22.107 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:22.107 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:22.107 Test: blockdev write read max offset ...passed 00:21:22.107 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:22.107 Test: blockdev writev readv 8 blocks ...passed 00:21:22.107 Test: blockdev writev readv 30 x 1block ...passed 00:21:22.368 Test: blockdev writev readv block ...passed 00:21:22.368 Test: blockdev writev readv size > 128k ...passed 00:21:22.368 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:22.368 Test: blockdev comparev and writev ...[2024-06-07 14:23:45.836332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.368 [2024-06-07 14:23:45.836356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.368 [2024-06-07 14:23:45.836366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.368 [2024-06-07 14:23:45.836373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:22.368 [2024-06-07 14:23:45.836882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.368 [2024-06-07 14:23:45.836890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:22.368 [2024-06-07 14:23:45.836900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.368 [2024-06-07 14:23:45.836905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:22.368 [2024-06-07 14:23:45.837375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.368 [2024-06-07 14:23:45.837382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:22.368 [2024-06-07 14:23:45.837392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.368 [2024-06-07 14:23:45.837397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:22.368 [2024-06-07 14:23:45.837861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.368 [2024-06-07 14:23:45.837869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:22.368 [2024-06-07 14:23:45.837878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.368 [2024-06-07 14:23:45.837883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:22.368 passed 00:21:22.368 Test: blockdev nvme passthru rw ...passed 00:21:22.368 Test: blockdev nvme passthru vendor specific ...[2024-06-07 14:23:45.924052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:22.368 [2024-06-07 14:23:45.924064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:22.368 [2024-06-07 14:23:45.924414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:22.368 [2024-06-07 14:23:45.924421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:22.368 [2024-06-07 14:23:45.924774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:22.368 [2024-06-07 14:23:45.924780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:22.368 [2024-06-07 14:23:45.925094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:22.368 [2024-06-07 14:23:45.925100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:22.368 passed 00:21:22.368 Test: blockdev nvme admin passthru ...passed 00:21:22.368 Test: blockdev copy ...passed 00:21:22.368 00:21:22.368 Run Summary: Type Total Ran Passed Failed Inactive 00:21:22.368 suites 1 1 n/a 0 0 00:21:22.368 tests 23 23 23 0 0 00:21:22.368 asserts 152 152 152 0 n/a 00:21:22.368 00:21:22.368 Elapsed time = 1.192 seconds 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:22.629 rmmod nvme_tcp 00:21:22.629 rmmod nvme_fabrics 00:21:22.629 rmmod nvme_keyring 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 550153 ']' 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 550153 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 
550153 ']' 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 550153 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 550153 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 550153' 00:21:22.629 killing process with pid 550153 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 550153 00:21:22.629 14:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 550153 00:21:22.890 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:22.890 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:22.890 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:22.890 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:22.890 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:22.890 14:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.890 14:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.890 14:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.436 14:23:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:25.436 00:21:25.436 real 0m12.774s 00:21:25.436 user 0m13.642s 00:21:25.436 sys 0m6.539s 00:21:25.436 14:23:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:25.436 14:23:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:21:25.436 ************************************ 00:21:25.436 END TEST nvmf_bdevio 00:21:25.436 ************************************ 00:21:25.436 14:23:48 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:25.436 14:23:48 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:25.436 14:23:48 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:25.436 14:23:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:25.436 ************************************ 00:21:25.436 START TEST nvmf_auth_target 00:21:25.436 ************************************ 00:21:25.436 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:25.436 * Looking for test storage... 
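Teardown is the mirror image: nvmftestfini unloads the host-side nvme-tcp/nvme-fabrics modules that were modprobed for the test, kills the nvmf_tgt process, removes the per-test namespace and flushes the leftover address. A rough equivalent of the cleanup traced above; the explicit namespace deletion is an assumption about what _remove_spdk_ns amounts to, the remaining commands appear verbatim in the trace:

    # unload the host-side NVMe/TCP stack pulled in for the test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the target application (killprocess $nvmfpid; pid 550153 in this run)
    kill 550153
    # drop the test namespace (assumed behaviour of _remove_spdk_ns)
    ip netns delete cvl_0_0_ns_spdk
    # flush the initiator-side address left on the root-namespace port
    ip -4 addr flush cvl_0_1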
00:21:25.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:25.436 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:21:25.437 14:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.572 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.573 14:23:56 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:33.573 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:33.573 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:21:33.573 Found net devices under 0000:31:00.0: cvl_0_0 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:33.573 Found net devices under 0000:31:00.1: cvl_0_1 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:33.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms 00:21:33.573 00:21:33.573 --- 10.0.0.2 ping statistics --- 00:21:33.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.573 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:33.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:21:33.573 00:21:33.573 --- 10.0.0.1 ping statistics --- 00:21:33.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.573 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=555335 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 555335 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 555335 ']' 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:33.573 14:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:33.574 14:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.834 14:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:33.834 14:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:21:33.834 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:33.834 14:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:21:33.834 14:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=555367 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=199da0d50df88d6f7f1b2297cdc520b90bf0de2f8cd2555d 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.wWP 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 199da0d50df88d6f7f1b2297cdc520b90bf0de2f8cd2555d 0 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 199da0d50df88d6f7f1b2297cdc520b90bf0de2f8cd2555d 0 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=199da0d50df88d6f7f1b2297cdc520b90bf0de2f8cd2555d 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.wWP 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.wWP 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.wWP 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4a05a086c02bc835b16b72d1e6c0ad914e7875fbac822e361be80856c2883d4f 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.9ZY 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4a05a086c02bc835b16b72d1e6c0ad914e7875fbac822e361be80856c2883d4f 3 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4a05a086c02bc835b16b72d1e6c0ad914e7875fbac822e361be80856c2883d4f 3 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4a05a086c02bc835b16b72d1e6c0ad914e7875fbac822e361be80856c2883d4f 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.9ZY 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.9ZY 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.9ZY 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=093941f0dd1ab0a84535e10fcb7e578e 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.j4W 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 093941f0dd1ab0a84535e10fcb7e578e 1 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 093941f0dd1ab0a84535e10fcb7e578e 1 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=093941f0dd1ab0a84535e10fcb7e578e 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.j4W 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.j4W 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.j4W 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:34.095 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:21:34.096 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:34.096 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:34.096 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9c9f6e619b93329ff7ff563a9e8be4ea7993e8ab6002627e 00:21:34.096 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:21:34.096 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.8rs 00:21:34.096 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9c9f6e619b93329ff7ff563a9e8be4ea7993e8ab6002627e 2 00:21:34.096 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9c9f6e619b93329ff7ff563a9e8be4ea7993e8ab6002627e 2 00:21:34.096 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:34.096 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:34.096 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9c9f6e619b93329ff7ff563a9e8be4ea7993e8ab6002627e 00:21:34.096 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:21:34.096 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.8rs 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.8rs 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.8rs 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f18fba6bea9a042508b7fc5abc88d067e75fa597686d3b78 00:21:34.356 
14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.FzM 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f18fba6bea9a042508b7fc5abc88d067e75fa597686d3b78 2 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f18fba6bea9a042508b7fc5abc88d067e75fa597686d3b78 2 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f18fba6bea9a042508b7fc5abc88d067e75fa597686d3b78 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.FzM 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.FzM 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.FzM 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=284da742ca0ac79f37d95b26a393e537 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.K7l 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 284da742ca0ac79f37d95b26a393e537 1 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 284da742ca0ac79f37d95b26a393e537 1 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=284da742ca0ac79f37d95b26a393e537 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:34.356 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.K7l 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.K7l 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.K7l 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d38f9c6f905a38a0b99f9b539953b879bc1e31cad5a4ac3f4b40938b6bc91661 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.4P0 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d38f9c6f905a38a0b99f9b539953b879bc1e31cad5a4ac3f4b40938b6bc91661 3 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d38f9c6f905a38a0b99f9b539953b879bc1e31cad5a4ac3f4b40938b6bc91661 3 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d38f9c6f905a38a0b99f9b539953b879bc1e31cad5a4ac3f4b40938b6bc91661 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.4P0 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.4P0 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.4P0 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 555335 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 555335 ']' 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
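Note on the gen_dhchap_key trace above: the hex strings pulled from /dev/urandom via xxd are turned into the DHHC-1 secrets that the later nvme connect commands pass as --dhchap-secret / --dhchap-ctrl-secret (the base64 payload of DHHC-1:00:MTk5ZGEw... decodes back to the first hex key generated here). A minimal sketch of that formatting step follows, assuming the encoding is base64 over the ASCII hex secret followed by its little-endian CRC-32 trailer; the helper name format_dhchap_secret is illustrative, not the exact nvmf/common.sh source.

  format_dhchap_secret() {
      # $1 = hex secret (e.g. from: xxd -p -c0 -l 24 /dev/urandom)
      # $2 = digest id: 0=null, 1=sha256, 2=sha384, 3=sha512
      local hexkey=$1 digest=$2
      python3 -c '
  import base64, sys, zlib
  key = sys.argv[1].encode()                    # the ASCII hex string itself is the secret material
  crc = zlib.crc32(key).to_bytes(4, "little")   # assumed trailer: little-endian CRC-32 of the key
  print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
  ' "$hexkey" "$digest"
  }

  # e.g. format_dhchap_secret 199da0d50df88d6f7f1b2297cdc520b90bf0de2f8cd2555d 0
  # should reproduce the DHHC-1:00:MTk5ZGEw...== secret used on the nvme connect lines below
  # (assuming the CRC-32 trailer); the temp files /tmp/spdk.key-* in the trace hold these strings.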
00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:34.357 14:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.618 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:34.618 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:21:34.618 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 555367 /var/tmp/host.sock 00:21:34.618 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 555367 ']' 00:21:34.618 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:21:34.618 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:34.618 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:34.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:21:34.618 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:34.618 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.878 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:34.878 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:21:34.878 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:21:34.878 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.878 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.878 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.878 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:34.878 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wWP 00:21:34.878 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.878 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.879 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.879 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.wWP 00:21:34.879 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wWP 00:21:34.879 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.9ZY ]] 00:21:34.879 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9ZY 00:21:34.879 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.879 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.879 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.879 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9ZY 00:21:34.879 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9ZY 00:21:35.138 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:35.138 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.j4W 00:21:35.138 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:35.138 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.138 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:35.138 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.j4W 00:21:35.138 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.j4W 00:21:35.399 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.8rs ]] 00:21:35.399 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8rs 00:21:35.399 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:35.399 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.399 14:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:35.399 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8rs 00:21:35.399 14:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8rs 00:21:35.399 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:35.399 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FzM 00:21:35.399 14:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:35.399 14:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.399 14:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:35.399 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.FzM 00:21:35.399 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.FzM 00:21:35.661 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.K7l ]] 00:21:35.661 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.K7l 00:21:35.661 14:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:35.661 14:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.661 14:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:35.661 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.K7l 00:21:35.661 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.K7l 00:21:35.922 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:21:35.922 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.4P0 00:21:35.922 14:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:35.922 14:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.922 14:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:35.922 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.4P0 00:21:35.922 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.4P0 00:21:35.922 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:21:35.922 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:35.922 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.922 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.922 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:35.922 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:36.189 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:21:36.189 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.190 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:36.190 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:36.190 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:36.190 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.190 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.190 14:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.190 14:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.190 14:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:36.190 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.190 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.451 00:21:36.451 14:23:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.451 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.451 14:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.451 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.451 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.451 14:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.451 14:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.451 14:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:36.451 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.451 { 00:21:36.451 "cntlid": 1, 00:21:36.451 "qid": 0, 00:21:36.451 "state": "enabled", 00:21:36.451 "listen_address": { 00:21:36.451 "trtype": "TCP", 00:21:36.451 "adrfam": "IPv4", 00:21:36.451 "traddr": "10.0.0.2", 00:21:36.451 "trsvcid": "4420" 00:21:36.451 }, 00:21:36.451 "peer_address": { 00:21:36.451 "trtype": "TCP", 00:21:36.451 "adrfam": "IPv4", 00:21:36.451 "traddr": "10.0.0.1", 00:21:36.451 "trsvcid": "60938" 00:21:36.451 }, 00:21:36.451 "auth": { 00:21:36.451 "state": "completed", 00:21:36.451 "digest": "sha256", 00:21:36.451 "dhgroup": "null" 00:21:36.451 } 00:21:36.451 } 00:21:36.451 ]' 00:21:36.451 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.711 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:36.711 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.711 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:36.711 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.711 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.711 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.711 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.971 14:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:21:37.543 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.543 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:37.543 14:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.543 14:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:37.543 14:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.543 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.543 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:37.543 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:37.803 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:21:37.803 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.803 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:37.803 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:37.803 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:37.803 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.803 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.803 14:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.803 14:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.803 14:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.803 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.803 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.063 00:21:38.063 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.063 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.063 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.063 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.063 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.063 14:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:38.063 14:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.063 14:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:38.063 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.063 { 00:21:38.063 "cntlid": 3, 00:21:38.063 "qid": 0, 00:21:38.063 "state": "enabled", 00:21:38.063 "listen_address": { 00:21:38.063 
"trtype": "TCP", 00:21:38.063 "adrfam": "IPv4", 00:21:38.063 "traddr": "10.0.0.2", 00:21:38.063 "trsvcid": "4420" 00:21:38.063 }, 00:21:38.063 "peer_address": { 00:21:38.063 "trtype": "TCP", 00:21:38.063 "adrfam": "IPv4", 00:21:38.063 "traddr": "10.0.0.1", 00:21:38.063 "trsvcid": "60982" 00:21:38.063 }, 00:21:38.063 "auth": { 00:21:38.063 "state": "completed", 00:21:38.063 "digest": "sha256", 00:21:38.063 "dhgroup": "null" 00:21:38.063 } 00:21:38.063 } 00:21:38.063 ]' 00:21:38.064 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.323 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:38.324 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.324 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:38.324 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.324 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.324 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.324 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.584 14:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:21:39.154 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.154 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:39.154 14:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:39.154 14:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.154 14:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:39.154 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.154 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:39.154 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:39.415 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:21:39.415 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.415 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:39.415 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:39.415 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:39.415 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.415 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.415 14:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:39.415 14:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.415 14:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:39.415 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.415 14:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.690 00:21:39.690 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.690 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.690 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.690 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.690 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.690 14:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:39.690 14:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.690 14:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:39.690 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.690 { 00:21:39.690 "cntlid": 5, 00:21:39.690 "qid": 0, 00:21:39.690 "state": "enabled", 00:21:39.690 "listen_address": { 00:21:39.690 "trtype": "TCP", 00:21:39.690 "adrfam": "IPv4", 00:21:39.690 "traddr": "10.0.0.2", 00:21:39.690 "trsvcid": "4420" 00:21:39.690 }, 00:21:39.690 "peer_address": { 00:21:39.690 "trtype": "TCP", 00:21:39.690 "adrfam": "IPv4", 00:21:39.690 "traddr": "10.0.0.1", 00:21:39.690 "trsvcid": "58242" 00:21:39.690 }, 00:21:39.690 "auth": { 00:21:39.690 "state": "completed", 00:21:39.690 "digest": "sha256", 00:21:39.690 "dhgroup": "null" 00:21:39.690 } 00:21:39.690 } 00:21:39.690 ]' 00:21:39.690 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.950 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:39.950 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.950 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:39.950 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.950 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.950 14:24:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.950 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.950 14:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.889 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:41.150 00:21:41.150 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.150 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.150 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.410 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.410 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.410 14:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.410 14:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.410 14:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.410 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.410 { 00:21:41.410 "cntlid": 7, 00:21:41.410 "qid": 0, 00:21:41.410 "state": "enabled", 00:21:41.410 "listen_address": { 00:21:41.410 "trtype": "TCP", 00:21:41.410 "adrfam": "IPv4", 00:21:41.410 "traddr": "10.0.0.2", 00:21:41.410 "trsvcid": "4420" 00:21:41.410 }, 00:21:41.410 "peer_address": { 00:21:41.410 "trtype": "TCP", 00:21:41.410 "adrfam": "IPv4", 00:21:41.410 "traddr": "10.0.0.1", 00:21:41.410 "trsvcid": "58272" 00:21:41.410 }, 00:21:41.410 "auth": { 00:21:41.410 "state": "completed", 00:21:41.410 "digest": "sha256", 00:21:41.410 "dhgroup": "null" 00:21:41.410 } 00:21:41.410 } 00:21:41.410 ]' 00:21:41.410 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.410 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:41.410 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.410 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:41.410 14:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.410 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.410 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.410 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.670 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:21:42.280 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.280 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:42.280 14:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.280 
14:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.280 14:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:42.280 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:42.280 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.281 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:42.281 14:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:42.540 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:21:42.540 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.540 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:42.540 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:42.540 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:42.540 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.540 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.540 14:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.540 14:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.540 14:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:42.540 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.540 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.801 00:21:42.801 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.801 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.801 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.061 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.061 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.061 14:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.061 14:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.061 14:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.061 14:24:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.061 { 00:21:43.061 "cntlid": 9, 00:21:43.061 "qid": 0, 00:21:43.061 "state": "enabled", 00:21:43.061 "listen_address": { 00:21:43.061 "trtype": "TCP", 00:21:43.061 "adrfam": "IPv4", 00:21:43.061 "traddr": "10.0.0.2", 00:21:43.061 "trsvcid": "4420" 00:21:43.061 }, 00:21:43.061 "peer_address": { 00:21:43.061 "trtype": "TCP", 00:21:43.061 "adrfam": "IPv4", 00:21:43.061 "traddr": "10.0.0.1", 00:21:43.061 "trsvcid": "58304" 00:21:43.061 }, 00:21:43.061 "auth": { 00:21:43.061 "state": "completed", 00:21:43.061 "digest": "sha256", 00:21:43.061 "dhgroup": "ffdhe2048" 00:21:43.061 } 00:21:43.061 } 00:21:43.061 ]' 00:21:43.061 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.061 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:43.061 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.061 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:43.061 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.061 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.061 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.061 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.322 14:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:21:43.894 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.894 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:43.894 14:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.894 14:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.894 14:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.894 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.894 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:43.894 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:44.156 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:21:44.156 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.156 14:24:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:44.156 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:44.156 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:44.156 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.156 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.156 14:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:44.156 14:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.156 14:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:44.156 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.156 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.417 00:21:44.418 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.418 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.418 14:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.679 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.679 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.679 14:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:44.679 14:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.679 14:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:44.679 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.679 { 00:21:44.679 "cntlid": 11, 00:21:44.679 "qid": 0, 00:21:44.679 "state": "enabled", 00:21:44.679 "listen_address": { 00:21:44.679 "trtype": "TCP", 00:21:44.679 "adrfam": "IPv4", 00:21:44.679 "traddr": "10.0.0.2", 00:21:44.679 "trsvcid": "4420" 00:21:44.679 }, 00:21:44.679 "peer_address": { 00:21:44.679 "trtype": "TCP", 00:21:44.679 "adrfam": "IPv4", 00:21:44.679 "traddr": "10.0.0.1", 00:21:44.679 "trsvcid": "58338" 00:21:44.679 }, 00:21:44.679 "auth": { 00:21:44.679 "state": "completed", 00:21:44.679 "digest": "sha256", 00:21:44.679 "dhgroup": "ffdhe2048" 00:21:44.679 } 00:21:44.679 } 00:21:44.679 ]' 00:21:44.679 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.679 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:44.679 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.679 14:24:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:44.679 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.679 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.679 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.679 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.940 14:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:21:45.512 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.512 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:45.512 14:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:45.512 14:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.512 14:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:45.512 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.512 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:45.512 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:45.773 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:21:45.773 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.773 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:45.773 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:45.773 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:45.773 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.773 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.773 14:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:45.773 14:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.773 14:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:45.773 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.773 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.035 00:21:46.035 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:46.035 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:46.035 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.035 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.035 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.035 14:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:46.035 14:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.035 14:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:46.035 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:46.035 { 00:21:46.035 "cntlid": 13, 00:21:46.035 "qid": 0, 00:21:46.035 "state": "enabled", 00:21:46.035 "listen_address": { 00:21:46.035 "trtype": "TCP", 00:21:46.035 "adrfam": "IPv4", 00:21:46.035 "traddr": "10.0.0.2", 00:21:46.035 "trsvcid": "4420" 00:21:46.035 }, 00:21:46.035 "peer_address": { 00:21:46.035 "trtype": "TCP", 00:21:46.035 "adrfam": "IPv4", 00:21:46.035 "traddr": "10.0.0.1", 00:21:46.035 "trsvcid": "58360" 00:21:46.035 }, 00:21:46.035 "auth": { 00:21:46.035 "state": "completed", 00:21:46.035 "digest": "sha256", 00:21:46.035 "dhgroup": "ffdhe2048" 00:21:46.035 } 00:21:46.035 } 00:21:46.035 ]' 00:21:46.035 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:46.296 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:46.296 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:46.296 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:46.296 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:46.296 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.296 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.296 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.558 14:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:21:47.129 14:24:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.129 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:47.129 14:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:47.129 14:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.129 14:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:47.129 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.129 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:47.129 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:47.390 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:21:47.390 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.390 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:47.390 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:47.390 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:47.390 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.390 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:47.390 14:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:47.390 14:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.390 14:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:47.390 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:47.390 14:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:47.650 00:21:47.650 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.650 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.650 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.650 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.650 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
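Every pass in this trace follows the same connect_authenticate sequence; condensed into plain commands (paths shortened, host and subsystem NQNs kept in variables, key names as they appear in the log), one pass looks roughly like the sketch below rather than the script verbatim:

# One connect_authenticate pass, reconstructed from the trace above (sketch, not the script itself)
HOSTRPC="spdk/scripts/rpc.py -s /var/tmp/host.sock"   # host-side SPDK RPC socket used by the "hostrpc" helper
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
SUBNQN=nqn.2024-03.io.spdk:cnode0
# restrict the host to the digest/dhgroup under test
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# allow the host on the target with the DH-HMAC-CHAP key pair for this pass
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# attach from the host side with the same keys
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# verify the controller came up and that the qpair finished authentication
$HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'                    # expect nvme0
rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expect completed
# tear the RPC-side connection down again before the nvme-cli leg of the pass
$HOSTRPC bdev_nvme_detach_controller nvme0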
00:21:47.650 14:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:47.650 14:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.650 14:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:47.650 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.650 { 00:21:47.650 "cntlid": 15, 00:21:47.650 "qid": 0, 00:21:47.650 "state": "enabled", 00:21:47.650 "listen_address": { 00:21:47.650 "trtype": "TCP", 00:21:47.650 "adrfam": "IPv4", 00:21:47.650 "traddr": "10.0.0.2", 00:21:47.650 "trsvcid": "4420" 00:21:47.650 }, 00:21:47.650 "peer_address": { 00:21:47.650 "trtype": "TCP", 00:21:47.650 "adrfam": "IPv4", 00:21:47.650 "traddr": "10.0.0.1", 00:21:47.650 "trsvcid": "58376" 00:21:47.650 }, 00:21:47.650 "auth": { 00:21:47.650 "state": "completed", 00:21:47.650 "digest": "sha256", 00:21:47.650 "dhgroup": "ffdhe2048" 00:21:47.650 } 00:21:47.650 } 00:21:47.650 ]' 00:21:47.650 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.910 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:47.910 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.910 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:47.910 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.910 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.910 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.910 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.171 14:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:21:48.744 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.744 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:48.744 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:48.744 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.744 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:48.744 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.744 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.744 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:48.744 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:49.005 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:21:49.005 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.005 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:49.005 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:49.005 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:49.005 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.005 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.005 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:49.005 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.005 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:49.005 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.005 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.266 00:21:49.266 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.266 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.266 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.266 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.266 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.266 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:49.266 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.266 14:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:49.266 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.266 { 00:21:49.266 "cntlid": 17, 00:21:49.266 "qid": 0, 00:21:49.266 "state": "enabled", 00:21:49.266 "listen_address": { 00:21:49.266 "trtype": "TCP", 00:21:49.266 "adrfam": "IPv4", 00:21:49.266 "traddr": "10.0.0.2", 00:21:49.266 "trsvcid": "4420" 00:21:49.266 }, 00:21:49.266 "peer_address": { 00:21:49.266 "trtype": "TCP", 00:21:49.266 "adrfam": "IPv4", 00:21:49.266 "traddr": "10.0.0.1", 00:21:49.266 "trsvcid": "51226" 00:21:49.266 }, 00:21:49.266 "auth": { 00:21:49.266 "state": "completed", 00:21:49.266 "digest": "sha256", 00:21:49.266 "dhgroup": "ffdhe3072" 00:21:49.266 } 00:21:49.267 } 00:21:49.267 ]' 00:21:49.267 14:24:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.527 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:49.527 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.527 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:49.527 14:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.527 14:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.527 14:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.527 14:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.527 14:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:21:50.470 14:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.470 14:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:50.470 14:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:50.470 14:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.470 14:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:50.470 14:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:50.470 14:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:50.470 14:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:50.470 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:21:50.470 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:50.470 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:50.470 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:50.470 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:50.470 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.470 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.470 14:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:50.470 
14:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.470 14:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:50.470 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.470 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.731 00:21:50.731 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.731 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.731 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.993 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.993 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.993 14:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:50.993 14:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.993 14:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:50.993 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.993 { 00:21:50.993 "cntlid": 19, 00:21:50.993 "qid": 0, 00:21:50.993 "state": "enabled", 00:21:50.993 "listen_address": { 00:21:50.993 "trtype": "TCP", 00:21:50.993 "adrfam": "IPv4", 00:21:50.993 "traddr": "10.0.0.2", 00:21:50.993 "trsvcid": "4420" 00:21:50.993 }, 00:21:50.993 "peer_address": { 00:21:50.993 "trtype": "TCP", 00:21:50.993 "adrfam": "IPv4", 00:21:50.993 "traddr": "10.0.0.1", 00:21:50.993 "trsvcid": "51250" 00:21:50.993 }, 00:21:50.993 "auth": { 00:21:50.993 "state": "completed", 00:21:50.993 "digest": "sha256", 00:21:50.993 "dhgroup": "ffdhe3072" 00:21:50.993 } 00:21:50.993 } 00:21:50.993 ]' 00:21:50.993 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.993 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:50.993 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.993 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:50.993 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.993 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.993 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.993 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.254 14:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.196 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.457 00:21:52.457 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.457 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
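The key3 passes in this trace (the ffdhe2048 one above, and the ffdhe3072 one that follows) register the host with --dhchap-key key3 only: the ckey array expansion visible at target/auth.sh@37 drops the controller-key argument when no controller secret exists for that index. A minimal sketch of that pattern, assuming a ckeys array that is empty at index 3 as in this run (keyid stands for the key index passed to the function, $3 in the script):

# Optional controller key: present only when ckeys[keyid] is non-empty
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid" "${ckey[@]}"
# with ckeys[3] unset, "${ckey[@]}" expands to nothing, which is why the key3 calls in this
# trace carry no --dhchap-ctrlr-key and the matching nvme connect lines carry only a
# --dhchap-secret (DHHC-1:03:... value) without --dhchap-ctrl-secret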
00:21:52.457 14:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.457 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.457 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.457 14:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:52.457 14:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.457 14:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:52.457 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.457 { 00:21:52.457 "cntlid": 21, 00:21:52.457 "qid": 0, 00:21:52.457 "state": "enabled", 00:21:52.457 "listen_address": { 00:21:52.457 "trtype": "TCP", 00:21:52.457 "adrfam": "IPv4", 00:21:52.457 "traddr": "10.0.0.2", 00:21:52.457 "trsvcid": "4420" 00:21:52.457 }, 00:21:52.457 "peer_address": { 00:21:52.457 "trtype": "TCP", 00:21:52.457 "adrfam": "IPv4", 00:21:52.457 "traddr": "10.0.0.1", 00:21:52.457 "trsvcid": "51276" 00:21:52.457 }, 00:21:52.457 "auth": { 00:21:52.457 "state": "completed", 00:21:52.457 "digest": "sha256", 00:21:52.457 "dhgroup": "ffdhe3072" 00:21:52.457 } 00:21:52.457 } 00:21:52.457 ]' 00:21:52.457 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.719 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:52.719 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.719 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:52.719 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.719 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.719 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.719 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.980 14:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:21:53.552 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.552 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:53.552 14:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:53.552 14:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.552 14:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:53.552 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:21:53.552 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:53.552 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:53.813 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:21:53.813 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.813 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:53.813 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:53.813 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:53.813 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.813 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:53.813 14:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:53.814 14:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.814 14:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:53.814 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.814 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:54.075 00:21:54.075 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.075 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.075 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.075 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.075 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.075 14:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.075 14:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.075 14:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.075 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.075 { 00:21:54.075 "cntlid": 23, 00:21:54.075 "qid": 0, 00:21:54.075 "state": "enabled", 00:21:54.075 "listen_address": { 00:21:54.075 "trtype": "TCP", 00:21:54.075 "adrfam": "IPv4", 00:21:54.075 "traddr": "10.0.0.2", 00:21:54.075 "trsvcid": "4420" 00:21:54.075 }, 00:21:54.075 "peer_address": { 00:21:54.075 "trtype": "TCP", 00:21:54.075 "adrfam": "IPv4", 
00:21:54.075 "traddr": "10.0.0.1", 00:21:54.075 "trsvcid": "51294" 00:21:54.075 }, 00:21:54.075 "auth": { 00:21:54.075 "state": "completed", 00:21:54.075 "digest": "sha256", 00:21:54.075 "dhgroup": "ffdhe3072" 00:21:54.075 } 00:21:54.075 } 00:21:54.075 ]' 00:21:54.075 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.075 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:54.075 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.335 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.335 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.335 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.335 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.335 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.335 14:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.274 14:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.535 00:21:55.535 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.535 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.535 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.796 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.796 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.796 14:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.796 14:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.796 14:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.796 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.796 { 00:21:55.796 "cntlid": 25, 00:21:55.796 "qid": 0, 00:21:55.796 "state": "enabled", 00:21:55.796 "listen_address": { 00:21:55.796 "trtype": "TCP", 00:21:55.796 "adrfam": "IPv4", 00:21:55.796 "traddr": "10.0.0.2", 00:21:55.796 "trsvcid": "4420" 00:21:55.796 }, 00:21:55.796 "peer_address": { 00:21:55.796 "trtype": "TCP", 00:21:55.796 "adrfam": "IPv4", 00:21:55.796 "traddr": "10.0.0.1", 00:21:55.796 "trsvcid": "51326" 00:21:55.796 }, 00:21:55.796 "auth": { 00:21:55.796 "state": "completed", 00:21:55.796 "digest": "sha256", 00:21:55.796 "dhgroup": "ffdhe4096" 00:21:55.796 } 00:21:55.796 } 00:21:55.796 ]' 00:21:55.796 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.796 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:55.796 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.796 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:55.796 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.796 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.796 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.796 14:24:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.057 14:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.028 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.288 00:21:57.288 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.288 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.288 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.288 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.288 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.288 14:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:57.288 14:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.288 14:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:57.288 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.288 { 00:21:57.288 "cntlid": 27, 00:21:57.288 "qid": 0, 00:21:57.288 "state": "enabled", 00:21:57.288 "listen_address": { 00:21:57.288 "trtype": "TCP", 00:21:57.288 "adrfam": "IPv4", 00:21:57.288 "traddr": "10.0.0.2", 00:21:57.288 "trsvcid": "4420" 00:21:57.288 }, 00:21:57.288 "peer_address": { 00:21:57.288 "trtype": "TCP", 00:21:57.288 "adrfam": "IPv4", 00:21:57.288 "traddr": "10.0.0.1", 00:21:57.288 "trsvcid": "51346" 00:21:57.288 }, 00:21:57.288 "auth": { 00:21:57.288 "state": "completed", 00:21:57.288 "digest": "sha256", 00:21:57.288 "dhgroup": "ffdhe4096" 00:21:57.288 } 00:21:57.288 } 00:21:57.288 ]' 00:21:57.288 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.548 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:57.548 14:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.548 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:57.548 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.548 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.548 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.548 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.807 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:21:58.377 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.377 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
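Besides the host-RPC attach, every pass also exercises the kernel initiator: nvme-cli connects in-band with the same secrets in DHHC-1 transport format, then disconnects, and the host is removed from the subsystem before the next combination. Condensed (secret values elided here; the full DHHC-1 strings appear in the trace):

# Kernel-initiator leg of each pass (sketch; secrets elided)
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'   # ctrl secret omitted on key3 passes
nvme disconnect -n nqn.2024-03.io.spdk:cnode0      # trace reports "disconnected 1 controller(s)"
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"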
00:21:58.377 14:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:58.377 14:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.377 14:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:58.377 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.377 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:58.377 14:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:58.638 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:21:58.638 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.638 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:58.638 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:58.638 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:58.638 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.638 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.638 14:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:58.638 14:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.638 14:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:58.638 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.638 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.898 00:21:58.898 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.898 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.898 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.159 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.159 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.159 14:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:59.159 14:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.159 14:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:59.159 
14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.159 { 00:21:59.159 "cntlid": 29, 00:21:59.159 "qid": 0, 00:21:59.159 "state": "enabled", 00:21:59.159 "listen_address": { 00:21:59.159 "trtype": "TCP", 00:21:59.159 "adrfam": "IPv4", 00:21:59.159 "traddr": "10.0.0.2", 00:21:59.159 "trsvcid": "4420" 00:21:59.159 }, 00:21:59.159 "peer_address": { 00:21:59.159 "trtype": "TCP", 00:21:59.159 "adrfam": "IPv4", 00:21:59.159 "traddr": "10.0.0.1", 00:21:59.159 "trsvcid": "56110" 00:21:59.159 }, 00:21:59.159 "auth": { 00:21:59.159 "state": "completed", 00:21:59.159 "digest": "sha256", 00:21:59.159 "dhgroup": "ffdhe4096" 00:21:59.159 } 00:21:59.159 } 00:21:59.159 ]' 00:21:59.159 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.159 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:59.159 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.159 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:59.159 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.159 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.159 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.159 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.421 14:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:21:59.991 14:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.991 14:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:59.991 14:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:59.991 14:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.991 14:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:59.991 14:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.991 14:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:59.991 14:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:00.253 14:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:22:00.253 14:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:00.253 14:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:22:00.253 14:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:00.253 14:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:00.253 14:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.253 14:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:22:00.253 14:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:00.253 14:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.253 14:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:00.253 14:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:00.253 14:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:00.513 00:22:00.513 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.513 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.513 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.774 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.774 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.774 14:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:00.774 14:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.774 14:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:00.774 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.774 { 00:22:00.774 "cntlid": 31, 00:22:00.774 "qid": 0, 00:22:00.774 "state": "enabled", 00:22:00.774 "listen_address": { 00:22:00.774 "trtype": "TCP", 00:22:00.774 "adrfam": "IPv4", 00:22:00.774 "traddr": "10.0.0.2", 00:22:00.774 "trsvcid": "4420" 00:22:00.774 }, 00:22:00.774 "peer_address": { 00:22:00.774 "trtype": "TCP", 00:22:00.774 "adrfam": "IPv4", 00:22:00.774 "traddr": "10.0.0.1", 00:22:00.774 "trsvcid": "56150" 00:22:00.774 }, 00:22:00.774 "auth": { 00:22:00.774 "state": "completed", 00:22:00.774 "digest": "sha256", 00:22:00.774 "dhgroup": "ffdhe4096" 00:22:00.774 } 00:22:00.774 } 00:22:00.774 ]' 00:22:00.774 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.774 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:00.774 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.774 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:00.774 14:24:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.774 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.774 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.774 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.035 14:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:22:01.608 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.608 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:01.608 14:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:01.608 14:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.608 14:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:01.608 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:01.608 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.608 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:01.608 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:01.964 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:22:01.964 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.964 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:01.964 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:01.964 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:01.964 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.964 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.964 14:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:01.964 14:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.964 14:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:01.964 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:22:01.964 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.225 00:22:02.225 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.225 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.225 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.485 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.485 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.485 14:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:02.485 14:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.485 14:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:02.485 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.485 { 00:22:02.485 "cntlid": 33, 00:22:02.485 "qid": 0, 00:22:02.485 "state": "enabled", 00:22:02.485 "listen_address": { 00:22:02.485 "trtype": "TCP", 00:22:02.485 "adrfam": "IPv4", 00:22:02.485 "traddr": "10.0.0.2", 00:22:02.485 "trsvcid": "4420" 00:22:02.485 }, 00:22:02.485 "peer_address": { 00:22:02.485 "trtype": "TCP", 00:22:02.485 "adrfam": "IPv4", 00:22:02.485 "traddr": "10.0.0.1", 00:22:02.485 "trsvcid": "56170" 00:22:02.485 }, 00:22:02.485 "auth": { 00:22:02.485 "state": "completed", 00:22:02.485 "digest": "sha256", 00:22:02.485 "dhgroup": "ffdhe6144" 00:22:02.485 } 00:22:02.485 } 00:22:02.485 ]' 00:22:02.485 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.485 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:02.485 14:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.485 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:02.485 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.485 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.485 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.485 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.746 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:22:03.317 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:22:03.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.317 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:03.317 14:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.317 14:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.317 14:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.317 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.317 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:03.317 14:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:03.679 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:22:03.679 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.679 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:03.679 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:03.679 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:03.679 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.679 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.679 14:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.679 14:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.679 14:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.679 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.679 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.970 00:22:03.970 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.970 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.971 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.971 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.971 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:22:03.971 14:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.971 14:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.231 14:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:04.231 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.231 { 00:22:04.231 "cntlid": 35, 00:22:04.231 "qid": 0, 00:22:04.231 "state": "enabled", 00:22:04.231 "listen_address": { 00:22:04.231 "trtype": "TCP", 00:22:04.231 "adrfam": "IPv4", 00:22:04.231 "traddr": "10.0.0.2", 00:22:04.231 "trsvcid": "4420" 00:22:04.231 }, 00:22:04.231 "peer_address": { 00:22:04.231 "trtype": "TCP", 00:22:04.231 "adrfam": "IPv4", 00:22:04.231 "traddr": "10.0.0.1", 00:22:04.231 "trsvcid": "56194" 00:22:04.231 }, 00:22:04.231 "auth": { 00:22:04.231 "state": "completed", 00:22:04.231 "digest": "sha256", 00:22:04.231 "dhgroup": "ffdhe6144" 00:22:04.231 } 00:22:04.231 } 00:22:04.231 ]' 00:22:04.231 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.231 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:04.231 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.231 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:04.231 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.231 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.231 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.231 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.492 14:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:22:05.063 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.063 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:05.063 14:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:05.063 14:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.063 14:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:05.063 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.063 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:05.063 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
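The trace above is one pass of the test's inner loop: for each key index the host-side NVMe driver is pinned to a single digest and DH group, the key is registered for the host NQN on the subsystem, a controller is attached (which is where DH-HMAC-CHAP actually runs), and the resulting qpair is checked for "state": "completed" with the expected digest and dhgroup before everything is torn down again. A minimal sketch of one such round, using only the RPCs and paths that appear in the trace (the key names key1/ckey1 are set up by the script before this excerpt; how they were created is not shown here):

  # sketch of one DH-HMAC-CHAP round as exercised by target/auth.sh
  # (assumes the target and the host application behind /var/tmp/host.sock are already running)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb

  # restrict the host to one digest / DH group for this round
  $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  # allow the host on the subsystem with bidirectional DH-HMAC-CHAP keys
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # attach a controller; authentication happens during the connect
  $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $hostnqn -n $subnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # the qpair should now report auth state "completed" with the chosen digest/dhgroup
  $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'
  # tear down before the next round
  $rpc -s $hostsock bdev_nvme_detach_controller nvme0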
00:22:05.325 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:22:05.325 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.325 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:05.325 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:05.325 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:05.325 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.325 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.325 14:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:05.325 14:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.325 14:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:05.325 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.325 14:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.585 00:22:05.585 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.585 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.585 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.846 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.846 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.846 14:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:05.846 14:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.846 14:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:05.846 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.846 { 00:22:05.846 "cntlid": 37, 00:22:05.846 "qid": 0, 00:22:05.846 "state": "enabled", 00:22:05.846 "listen_address": { 00:22:05.846 "trtype": "TCP", 00:22:05.846 "adrfam": "IPv4", 00:22:05.846 "traddr": "10.0.0.2", 00:22:05.846 "trsvcid": "4420" 00:22:05.846 }, 00:22:05.846 "peer_address": { 00:22:05.846 "trtype": "TCP", 00:22:05.846 "adrfam": "IPv4", 00:22:05.846 "traddr": "10.0.0.1", 00:22:05.846 "trsvcid": "56220" 00:22:05.846 }, 00:22:05.846 "auth": { 00:22:05.846 "state": "completed", 00:22:05.846 "digest": "sha256", 00:22:05.846 "dhgroup": "ffdhe6144" 00:22:05.846 } 00:22:05.846 } 00:22:05.846 ]' 00:22:05.846 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:22:05.846 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:05.846 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:05.846 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:05.846 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.846 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.846 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.846 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.108 14:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:22:06.679 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.680 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:06.680 14:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.680 14:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.680 14:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.680 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.680 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:06.680 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:06.940 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:22:06.940 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:06.940 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:06.940 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:06.940 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:06.940 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.940 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:22:06.940 14:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.940 14:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.940 14:24:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.940 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.940 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.201 00:22:07.201 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.201 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:07.201 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.463 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.463 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.463 14:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:07.463 14:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.463 14:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:07.463 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:07.463 { 00:22:07.463 "cntlid": 39, 00:22:07.463 "qid": 0, 00:22:07.463 "state": "enabled", 00:22:07.463 "listen_address": { 00:22:07.463 "trtype": "TCP", 00:22:07.463 "adrfam": "IPv4", 00:22:07.463 "traddr": "10.0.0.2", 00:22:07.463 "trsvcid": "4420" 00:22:07.463 }, 00:22:07.463 "peer_address": { 00:22:07.463 "trtype": "TCP", 00:22:07.463 "adrfam": "IPv4", 00:22:07.463 "traddr": "10.0.0.1", 00:22:07.463 "trsvcid": "56250" 00:22:07.463 }, 00:22:07.463 "auth": { 00:22:07.463 "state": "completed", 00:22:07.463 "digest": "sha256", 00:22:07.463 "dhgroup": "ffdhe6144" 00:22:07.463 } 00:22:07.463 } 00:22:07.463 ]' 00:22:07.463 14:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:07.463 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:07.463 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.463 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:07.463 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:07.723 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.723 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.723 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.723 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret 
DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:22:08.663 14:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.663 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:08.663 14:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:08.663 14:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.663 14:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:08.663 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.664 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:08.664 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:08.664 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:08.664 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:22:08.664 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:08.664 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:08.664 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:08.664 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:08.664 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.664 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.664 14:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:08.664 14:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.664 14:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:08.664 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.664 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.235 00:22:09.235 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:09.235 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:09.235 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.496 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.496 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.496 14:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:09.496 14:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.496 14:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:09.496 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.496 { 00:22:09.496 "cntlid": 41, 00:22:09.496 "qid": 0, 00:22:09.496 "state": "enabled", 00:22:09.496 "listen_address": { 00:22:09.496 "trtype": "TCP", 00:22:09.496 "adrfam": "IPv4", 00:22:09.496 "traddr": "10.0.0.2", 00:22:09.496 "trsvcid": "4420" 00:22:09.496 }, 00:22:09.496 "peer_address": { 00:22:09.496 "trtype": "TCP", 00:22:09.496 "adrfam": "IPv4", 00:22:09.496 "traddr": "10.0.0.1", 00:22:09.496 "trsvcid": "39222" 00:22:09.496 }, 00:22:09.496 "auth": { 00:22:09.496 "state": "completed", 00:22:09.496 "digest": "sha256", 00:22:09.497 "dhgroup": "ffdhe8192" 00:22:09.497 } 00:22:09.497 } 00:22:09.497 ]' 00:22:09.497 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:09.497 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:09.497 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.497 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.497 14:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.497 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.497 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.497 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.758 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:22:10.328 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.328 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:10.328 14:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:10.328 14:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.328 14:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:10.328 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:22:10.328 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:10.328 14:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:10.589 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:22:10.589 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.589 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:10.589 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:10.589 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:10.589 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.590 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.590 14:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:10.590 14:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.590 14:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:10.590 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.590 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.160 00:22:11.160 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.160 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.160 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.160 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.160 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.160 14:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.160 14:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.160 14:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.160 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:11.160 { 00:22:11.160 "cntlid": 43, 00:22:11.160 "qid": 0, 00:22:11.160 "state": "enabled", 00:22:11.160 "listen_address": { 00:22:11.160 "trtype": "TCP", 00:22:11.160 "adrfam": "IPv4", 00:22:11.160 "traddr": "10.0.0.2", 00:22:11.160 "trsvcid": "4420" 00:22:11.160 }, 00:22:11.160 "peer_address": { 
00:22:11.160 "trtype": "TCP", 00:22:11.160 "adrfam": "IPv4", 00:22:11.160 "traddr": "10.0.0.1", 00:22:11.160 "trsvcid": "39250" 00:22:11.160 }, 00:22:11.160 "auth": { 00:22:11.160 "state": "completed", 00:22:11.160 "digest": "sha256", 00:22:11.160 "dhgroup": "ffdhe8192" 00:22:11.160 } 00:22:11.160 } 00:22:11.160 ]' 00:22:11.160 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:11.420 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:11.420 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:11.420 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.420 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:11.420 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.420 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.420 14:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.421 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.362 14:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.933 00:22:12.933 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.933 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.933 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.194 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.194 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.194 14:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:13.194 14:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.194 14:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:13.194 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:13.194 { 00:22:13.194 "cntlid": 45, 00:22:13.194 "qid": 0, 00:22:13.194 "state": "enabled", 00:22:13.194 "listen_address": { 00:22:13.194 "trtype": "TCP", 00:22:13.194 "adrfam": "IPv4", 00:22:13.194 "traddr": "10.0.0.2", 00:22:13.194 "trsvcid": "4420" 00:22:13.194 }, 00:22:13.194 "peer_address": { 00:22:13.194 "trtype": "TCP", 00:22:13.194 "adrfam": "IPv4", 00:22:13.194 "traddr": "10.0.0.1", 00:22:13.194 "trsvcid": "39276" 00:22:13.194 }, 00:22:13.194 "auth": { 00:22:13.194 "state": "completed", 00:22:13.194 "digest": "sha256", 00:22:13.194 "dhgroup": "ffdhe8192" 00:22:13.194 } 00:22:13.194 } 00:22:13.194 ]' 00:22:13.194 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:13.194 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:13.194 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:13.194 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:13.194 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:13.194 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.194 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.194 14:24:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.455 14:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:22:14.026 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.026 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:14.026 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.026 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.287 14:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
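Note that the key3 rounds above pass no --dhchap-ctrlr-key: the script builds that argument with ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), which expands to an empty array when no controller key is defined for the index, so with key3 only the host is authenticated (unidirectional DH-HMAC-CHAP). A small stand-alone illustration of that expansion pattern (the array contents here are made up; only the expansion syntax is taken from the trace):

  # ${var:+words} yields the alternative words only when var is set and non-empty,
  # so an empty ckeys[3] drops the --dhchap-ctrlr-key argument entirely
  ckeys=("c0" "c1" "c2" "")
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]} extra arg(s)"   # prints: 0 extra arg(s)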
00:22:14.857 00:22:14.857 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.857 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.857 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.118 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.118 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.118 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.118 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.118 14:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.118 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.118 { 00:22:15.118 "cntlid": 47, 00:22:15.118 "qid": 0, 00:22:15.118 "state": "enabled", 00:22:15.118 "listen_address": { 00:22:15.118 "trtype": "TCP", 00:22:15.118 "adrfam": "IPv4", 00:22:15.118 "traddr": "10.0.0.2", 00:22:15.118 "trsvcid": "4420" 00:22:15.118 }, 00:22:15.118 "peer_address": { 00:22:15.118 "trtype": "TCP", 00:22:15.118 "adrfam": "IPv4", 00:22:15.118 "traddr": "10.0.0.1", 00:22:15.118 "trsvcid": "39302" 00:22:15.118 }, 00:22:15.118 "auth": { 00:22:15.118 "state": "completed", 00:22:15.118 "digest": "sha256", 00:22:15.118 "dhgroup": "ffdhe8192" 00:22:15.118 } 00:22:15.118 } 00:22:15.118 ]' 00:22:15.118 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.118 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:15.118 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.118 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.118 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:15.118 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.118 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.118 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.380 14:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:22:15.950 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.950 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:15.950 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.950 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.950 
14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.950 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:15.950 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:15.950 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:15.950 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:15.950 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:16.211 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:22:16.211 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.211 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:16.211 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:16.211 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:16.211 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.211 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.211 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.211 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.211 14:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.211 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.211 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.472 00:22:16.472 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.472 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.472 14:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.472 14:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.472 14:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.472 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.472 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.472 14:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.472 14:24:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:16.472 { 00:22:16.472 "cntlid": 49, 00:22:16.472 "qid": 0, 00:22:16.472 "state": "enabled", 00:22:16.472 "listen_address": { 00:22:16.472 "trtype": "TCP", 00:22:16.472 "adrfam": "IPv4", 00:22:16.472 "traddr": "10.0.0.2", 00:22:16.472 "trsvcid": "4420" 00:22:16.472 }, 00:22:16.472 "peer_address": { 00:22:16.472 "trtype": "TCP", 00:22:16.472 "adrfam": "IPv4", 00:22:16.472 "traddr": "10.0.0.1", 00:22:16.472 "trsvcid": "39338" 00:22:16.472 }, 00:22:16.472 "auth": { 00:22:16.472 "state": "completed", 00:22:16.472 "digest": "sha384", 00:22:16.472 "dhgroup": "null" 00:22:16.472 } 00:22:16.472 } 00:22:16.472 ]' 00:22:16.472 14:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:16.732 14:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:16.732 14:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:16.732 14:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:16.732 14:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:16.732 14:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.732 14:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.732 14:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.992 14:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:22:17.562 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.562 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:17.562 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:17.562 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.562 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:17.562 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:17.562 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:17.562 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:17.822 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:22:17.822 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.822 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:22:17.822 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:17.822 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:17.822 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.822 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.822 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:17.822 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.822 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:17.822 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.822 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.082 00:22:18.082 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.082 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.082 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.082 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.082 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.082 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:18.082 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.082 14:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:18.082 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.082 { 00:22:18.082 "cntlid": 51, 00:22:18.082 "qid": 0, 00:22:18.082 "state": "enabled", 00:22:18.082 "listen_address": { 00:22:18.082 "trtype": "TCP", 00:22:18.082 "adrfam": "IPv4", 00:22:18.082 "traddr": "10.0.0.2", 00:22:18.082 "trsvcid": "4420" 00:22:18.082 }, 00:22:18.082 "peer_address": { 00:22:18.082 "trtype": "TCP", 00:22:18.082 "adrfam": "IPv4", 00:22:18.082 "traddr": "10.0.0.1", 00:22:18.082 "trsvcid": "39366" 00:22:18.082 }, 00:22:18.082 "auth": { 00:22:18.082 "state": "completed", 00:22:18.082 "digest": "sha384", 00:22:18.082 "dhgroup": "null" 00:22:18.082 } 00:22:18.082 } 00:22:18.082 ]' 00:22:18.082 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.342 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:18.342 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.342 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
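These sha384/null rounds exercise DH-HMAC-CHAP without an ephemeral Diffie-Hellman exchange: the host is configured with --dhchap-dhgroups null, and the qpair is expected to report "dhgroup": "null" alongside the usual completed state. As the following trace shows, each round also drives the kernel initiator through nvme-cli, passing the raw DHHC-1 secrets on the command line, before disconnecting and removing the host entry again. A sketch of that nvme-cli leg (the secret values below are shortened placeholders; the full DHHC-1 strings appear in the trace):

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid

  # in-band authenticated connect from the kernel initiator
  nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn --hostid $hostid \
      --dhchap-secret 'DHHC-1:01:<host secret>' --dhchap-ctrl-secret 'DHHC-1:02:<ctrl secret>'
  nvme disconnect -n $subnqn
  # drop the host entry so the next digest/dhgroup/key combination starts clean
  # (rpc_cmd is the test harness's wrapper around scripts/rpc.py against the target)
  rpc_cmd nvmf_subsystem_remove_host $subnqn $hostnqn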
00:22:18.342 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.342 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.342 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.342 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.601 14:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:22:19.182 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.182 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:19.182 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.182 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.182 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.182 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:19.182 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:19.182 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:19.442 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:22:19.442 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:19.442 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:19.442 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:19.442 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:19.442 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.442 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.442 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.442 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.442 14:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.442 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:22:19.442 14:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.442 00:22:19.702 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.702 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.702 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.702 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.702 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.702 14:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.702 14:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.702 14:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.702 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.702 { 00:22:19.702 "cntlid": 53, 00:22:19.702 "qid": 0, 00:22:19.702 "state": "enabled", 00:22:19.702 "listen_address": { 00:22:19.702 "trtype": "TCP", 00:22:19.702 "adrfam": "IPv4", 00:22:19.702 "traddr": "10.0.0.2", 00:22:19.702 "trsvcid": "4420" 00:22:19.702 }, 00:22:19.702 "peer_address": { 00:22:19.702 "trtype": "TCP", 00:22:19.702 "adrfam": "IPv4", 00:22:19.702 "traddr": "10.0.0.1", 00:22:19.702 "trsvcid": "40584" 00:22:19.702 }, 00:22:19.702 "auth": { 00:22:19.702 "state": "completed", 00:22:19.702 "digest": "sha384", 00:22:19.702 "dhgroup": "null" 00:22:19.702 } 00:22:19.702 } 00:22:19.702 ]' 00:22:19.702 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.702 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:19.702 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.962 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:19.962 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.962 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.962 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.962 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.962 14:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.964 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.964 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.237 00:22:21.237 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.237 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.237 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.237 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.237 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.237 14:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:21.237 14:24:44 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:21.237 14:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:21.237 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.237 { 00:22:21.237 "cntlid": 55, 00:22:21.237 "qid": 0, 00:22:21.237 "state": "enabled", 00:22:21.237 "listen_address": { 00:22:21.237 "trtype": "TCP", 00:22:21.237 "adrfam": "IPv4", 00:22:21.237 "traddr": "10.0.0.2", 00:22:21.237 "trsvcid": "4420" 00:22:21.237 }, 00:22:21.237 "peer_address": { 00:22:21.237 "trtype": "TCP", 00:22:21.237 "adrfam": "IPv4", 00:22:21.237 "traddr": "10.0.0.1", 00:22:21.237 "trsvcid": "40606" 00:22:21.237 }, 00:22:21.237 "auth": { 00:22:21.237 "state": "completed", 00:22:21.237 "digest": "sha384", 00:22:21.237 "dhgroup": "null" 00:22:21.237 } 00:22:21.237 } 00:22:21.237 ]' 00:22:21.238 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.498 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:21.498 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.498 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:21.498 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.498 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.498 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.498 14:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.759 14:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:22:22.331 14:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.331 14:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:22.331 14:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:22.331 14:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.331 14:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:22.331 14:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:22.331 14:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:22.331 14:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:22.331 14:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:22.592 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:22:22.592 
14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:22.592 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:22.592 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:22.592 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:22.592 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.592 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.592 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:22.592 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.592 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:22.592 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.592 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.853 00:22:22.853 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:22.853 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.853 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:22.853 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.853 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.853 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:22.853 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.853 14:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:22.853 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:22.853 { 00:22:22.853 "cntlid": 57, 00:22:22.853 "qid": 0, 00:22:22.853 "state": "enabled", 00:22:22.853 "listen_address": { 00:22:22.853 "trtype": "TCP", 00:22:22.853 "adrfam": "IPv4", 00:22:22.853 "traddr": "10.0.0.2", 00:22:22.853 "trsvcid": "4420" 00:22:22.853 }, 00:22:22.853 "peer_address": { 00:22:22.853 "trtype": "TCP", 00:22:22.853 "adrfam": "IPv4", 00:22:22.853 "traddr": "10.0.0.1", 00:22:22.853 "trsvcid": "40622" 00:22:22.853 }, 00:22:22.853 "auth": { 00:22:22.853 "state": "completed", 00:22:22.853 "digest": "sha384", 00:22:22.853 "dhgroup": "ffdhe2048" 00:22:22.853 } 00:22:22.853 } 00:22:22.853 ]' 00:22:22.853 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.114 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:23.114 14:24:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.114 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:23.114 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.114 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.114 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.114 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.375 14:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:22:23.946 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.946 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:23.946 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:23.946 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.946 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:23.946 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:23.946 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:23.946 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:24.206 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:22:24.206 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:24.206 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:24.206 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:24.206 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:24.206 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.206 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.206 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:24.206 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.206 14:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:24.206 14:24:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.206 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.466 00:22:24.466 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:24.466 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:24.466 14:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.466 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.466 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.466 14:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:24.466 14:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.466 14:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:24.466 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:24.466 { 00:22:24.466 "cntlid": 59, 00:22:24.466 "qid": 0, 00:22:24.466 "state": "enabled", 00:22:24.466 "listen_address": { 00:22:24.466 "trtype": "TCP", 00:22:24.466 "adrfam": "IPv4", 00:22:24.466 "traddr": "10.0.0.2", 00:22:24.466 "trsvcid": "4420" 00:22:24.466 }, 00:22:24.466 "peer_address": { 00:22:24.466 "trtype": "TCP", 00:22:24.466 "adrfam": "IPv4", 00:22:24.466 "traddr": "10.0.0.1", 00:22:24.466 "trsvcid": "40650" 00:22:24.466 }, 00:22:24.466 "auth": { 00:22:24.466 "state": "completed", 00:22:24.466 "digest": "sha384", 00:22:24.466 "dhgroup": "ffdhe2048" 00:22:24.466 } 00:22:24.466 } 00:22:24.466 ]' 00:22:24.466 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:24.727 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:24.727 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:24.727 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:24.727 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:24.727 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.727 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.727 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.988 14:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret 
DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:22:25.562 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.562 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:25.562 14:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:25.562 14:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.562 14:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:25.562 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:25.562 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:25.562 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:25.823 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:22:25.823 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:25.823 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:25.823 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:25.823 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:25.823 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.823 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.823 14:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:25.823 14:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.823 14:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:25.823 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.823 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.823 00:22:25.823 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:25.823 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.823 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:22:26.085 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.085 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.085 14:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:26.085 14:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.085 14:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:26.085 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:26.085 { 00:22:26.085 "cntlid": 61, 00:22:26.085 "qid": 0, 00:22:26.085 "state": "enabled", 00:22:26.085 "listen_address": { 00:22:26.085 "trtype": "TCP", 00:22:26.085 "adrfam": "IPv4", 00:22:26.085 "traddr": "10.0.0.2", 00:22:26.085 "trsvcid": "4420" 00:22:26.085 }, 00:22:26.085 "peer_address": { 00:22:26.085 "trtype": "TCP", 00:22:26.085 "adrfam": "IPv4", 00:22:26.085 "traddr": "10.0.0.1", 00:22:26.085 "trsvcid": "40674" 00:22:26.085 }, 00:22:26.085 "auth": { 00:22:26.085 "state": "completed", 00:22:26.085 "digest": "sha384", 00:22:26.085 "dhgroup": "ffdhe2048" 00:22:26.085 } 00:22:26.085 } 00:22:26.085 ]' 00:22:26.085 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:26.085 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:26.085 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:26.346 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:26.346 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:26.346 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.346 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.346 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.347 14:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.288 14:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.549 00:22:27.549 14:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:27.549 14:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:27.549 14:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.549 14:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.549 14:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.549 14:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.549 14:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.809 14:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.809 14:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:27.809 { 00:22:27.809 "cntlid": 63, 00:22:27.809 "qid": 0, 00:22:27.809 "state": "enabled", 00:22:27.809 "listen_address": { 00:22:27.809 "trtype": "TCP", 00:22:27.809 "adrfam": "IPv4", 00:22:27.809 "traddr": "10.0.0.2", 00:22:27.809 "trsvcid": "4420" 00:22:27.809 }, 00:22:27.809 "peer_address": { 00:22:27.809 "trtype": "TCP", 00:22:27.809 "adrfam": "IPv4", 00:22:27.809 "traddr": "10.0.0.1", 00:22:27.809 "trsvcid": "40704" 00:22:27.809 }, 00:22:27.809 "auth": { 00:22:27.809 "state": "completed", 00:22:27.809 "digest": 
"sha384", 00:22:27.809 "dhgroup": "ffdhe2048" 00:22:27.809 } 00:22:27.809 } 00:22:27.809 ]' 00:22:27.809 14:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:27.809 14:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:27.810 14:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:27.810 14:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:27.810 14:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:27.810 14:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.810 14:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.810 14:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.071 14:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:22:28.645 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.645 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:28.645 14:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:28.645 14:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.645 14:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:28.645 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.645 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:28.645 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:28.645 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:28.956 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:22:28.956 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:28.956 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:28.956 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:28.956 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:28.956 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.956 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:22:28.956 14:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:28.956 14:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.956 14:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:28.956 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.956 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.230 00:22:29.230 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:29.230 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:29.230 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.230 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.230 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.230 14:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:29.230 14:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.230 14:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:29.230 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:29.230 { 00:22:29.230 "cntlid": 65, 00:22:29.230 "qid": 0, 00:22:29.230 "state": "enabled", 00:22:29.230 "listen_address": { 00:22:29.230 "trtype": "TCP", 00:22:29.230 "adrfam": "IPv4", 00:22:29.230 "traddr": "10.0.0.2", 00:22:29.230 "trsvcid": "4420" 00:22:29.230 }, 00:22:29.230 "peer_address": { 00:22:29.230 "trtype": "TCP", 00:22:29.230 "adrfam": "IPv4", 00:22:29.230 "traddr": "10.0.0.1", 00:22:29.230 "trsvcid": "49970" 00:22:29.230 }, 00:22:29.230 "auth": { 00:22:29.230 "state": "completed", 00:22:29.230 "digest": "sha384", 00:22:29.230 "dhgroup": "ffdhe3072" 00:22:29.230 } 00:22:29.230 } 00:22:29.230 ]' 00:22:29.230 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:29.230 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:29.230 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:29.501 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:29.501 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:29.501 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.501 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.501 14:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.501 
14:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.442 14:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.704 00:22:30.704 14:24:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.704 14:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.704 14:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.965 14:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.965 14:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.965 14:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.965 14:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.965 14:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.965 14:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:30.965 { 00:22:30.965 "cntlid": 67, 00:22:30.965 "qid": 0, 00:22:30.965 "state": "enabled", 00:22:30.965 "listen_address": { 00:22:30.965 "trtype": "TCP", 00:22:30.965 "adrfam": "IPv4", 00:22:30.965 "traddr": "10.0.0.2", 00:22:30.965 "trsvcid": "4420" 00:22:30.965 }, 00:22:30.965 "peer_address": { 00:22:30.965 "trtype": "TCP", 00:22:30.965 "adrfam": "IPv4", 00:22:30.965 "traddr": "10.0.0.1", 00:22:30.965 "trsvcid": "49980" 00:22:30.965 }, 00:22:30.965 "auth": { 00:22:30.965 "state": "completed", 00:22:30.965 "digest": "sha384", 00:22:30.965 "dhgroup": "ffdhe3072" 00:22:30.965 } 00:22:30.965 } 00:22:30.965 ]' 00:22:30.965 14:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:30.965 14:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:30.965 14:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:30.965 14:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:30.965 14:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:30.965 14:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.965 14:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.965 14:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.226 14:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:22:31.800 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.800 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:31.800 14:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.800 14:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.800 
14:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.800 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:31.800 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:31.800 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:32.062 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:22:32.062 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:32.062 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:32.062 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:32.062 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:32.062 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.062 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.062 14:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:32.062 14:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.062 14:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:32.062 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.062 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.323 00:22:32.323 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:32.323 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:32.323 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.323 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.323 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.323 14:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:32.323 14:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.323 14:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:32.323 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:32.323 { 00:22:32.323 "cntlid": 69, 00:22:32.323 "qid": 0, 00:22:32.323 "state": "enabled", 00:22:32.323 "listen_address": { 
00:22:32.323 "trtype": "TCP", 00:22:32.323 "adrfam": "IPv4", 00:22:32.323 "traddr": "10.0.0.2", 00:22:32.323 "trsvcid": "4420" 00:22:32.323 }, 00:22:32.323 "peer_address": { 00:22:32.323 "trtype": "TCP", 00:22:32.323 "adrfam": "IPv4", 00:22:32.323 "traddr": "10.0.0.1", 00:22:32.323 "trsvcid": "50002" 00:22:32.323 }, 00:22:32.323 "auth": { 00:22:32.323 "state": "completed", 00:22:32.323 "digest": "sha384", 00:22:32.323 "dhgroup": "ffdhe3072" 00:22:32.323 } 00:22:32.323 } 00:22:32.323 ]' 00:22:32.323 14:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:32.584 14:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:32.584 14:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:32.584 14:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:32.584 14:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:32.584 14:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.584 14:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.584 14:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.845 14:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:22:33.416 14:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.416 14:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:33.416 14:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:33.416 14:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.416 14:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:33.416 14:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:33.416 14:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:33.416 14:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:33.677 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:22:33.677 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:33.677 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:33.677 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:33.677 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:33.677 
14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.677 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:22:33.677 14:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:33.677 14:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.677 14:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:33.677 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:33.677 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:33.938 00:22:33.938 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:33.938 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:33.938 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.938 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.938 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.938 14:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:33.938 14:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.938 14:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:33.938 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:33.938 { 00:22:33.938 "cntlid": 71, 00:22:33.938 "qid": 0, 00:22:33.938 "state": "enabled", 00:22:33.938 "listen_address": { 00:22:33.938 "trtype": "TCP", 00:22:33.938 "adrfam": "IPv4", 00:22:33.938 "traddr": "10.0.0.2", 00:22:33.938 "trsvcid": "4420" 00:22:33.938 }, 00:22:33.938 "peer_address": { 00:22:33.938 "trtype": "TCP", 00:22:33.938 "adrfam": "IPv4", 00:22:33.938 "traddr": "10.0.0.1", 00:22:33.938 "trsvcid": "50040" 00:22:33.938 }, 00:22:33.938 "auth": { 00:22:33.938 "state": "completed", 00:22:33.938 "digest": "sha384", 00:22:33.938 "dhgroup": "ffdhe3072" 00:22:33.938 } 00:22:33.939 } 00:22:33.939 ]' 00:22:33.939 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:34.199 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:34.199 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:34.199 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:34.199 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:34.199 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.199 14:24:57 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.199 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.460 14:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.152 14:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.414 00:22:35.414 14:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:35.414 14:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:35.414 14:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.675 14:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.675 14:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.675 14:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:35.675 14:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.675 14:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:35.675 14:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:35.675 { 00:22:35.675 "cntlid": 73, 00:22:35.675 "qid": 0, 00:22:35.675 "state": "enabled", 00:22:35.675 "listen_address": { 00:22:35.675 "trtype": "TCP", 00:22:35.675 "adrfam": "IPv4", 00:22:35.675 "traddr": "10.0.0.2", 00:22:35.675 "trsvcid": "4420" 00:22:35.675 }, 00:22:35.675 "peer_address": { 00:22:35.675 "trtype": "TCP", 00:22:35.675 "adrfam": "IPv4", 00:22:35.675 "traddr": "10.0.0.1", 00:22:35.675 "trsvcid": "50074" 00:22:35.675 }, 00:22:35.675 "auth": { 00:22:35.675 "state": "completed", 00:22:35.675 "digest": "sha384", 00:22:35.675 "dhgroup": "ffdhe4096" 00:22:35.675 } 00:22:35.675 } 00:22:35.675 ]' 00:22:35.675 14:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:35.675 14:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:35.675 14:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:35.675 14:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:35.675 14:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:35.936 14:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.936 14:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.936 14:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.936 14:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:22:36.874 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.874 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:36.874 14:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:36.874 14:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.874 14:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:36.874 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:36.874 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:36.874 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:36.874 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:22:36.875 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:36.875 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:36.875 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:36.875 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:36.875 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.875 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.875 14:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:36.875 14:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.875 14:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:36.875 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.875 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.135 00:22:37.135 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:37.135 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.135 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:37.135 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.135 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.135 14:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:37.135 14:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:22:37.395 14:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:37.395 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:37.395 { 00:22:37.395 "cntlid": 75, 00:22:37.396 "qid": 0, 00:22:37.396 "state": "enabled", 00:22:37.396 "listen_address": { 00:22:37.396 "trtype": "TCP", 00:22:37.396 "adrfam": "IPv4", 00:22:37.396 "traddr": "10.0.0.2", 00:22:37.396 "trsvcid": "4420" 00:22:37.396 }, 00:22:37.396 "peer_address": { 00:22:37.396 "trtype": "TCP", 00:22:37.396 "adrfam": "IPv4", 00:22:37.396 "traddr": "10.0.0.1", 00:22:37.396 "trsvcid": "50102" 00:22:37.396 }, 00:22:37.396 "auth": { 00:22:37.396 "state": "completed", 00:22:37.396 "digest": "sha384", 00:22:37.396 "dhgroup": "ffdhe4096" 00:22:37.396 } 00:22:37.396 } 00:22:37.396 ]' 00:22:37.396 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:37.396 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:37.396 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:37.396 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:37.396 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:37.396 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.396 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.396 14:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.655 14:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:22:38.225 14:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.225 14:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:38.225 14:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.225 14:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.225 14:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.225 14:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:38.225 14:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:38.225 14:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:38.485 14:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:22:38.485 14:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:22:38.485 14:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:38.485 14:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:38.485 14:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:38.485 14:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.485 14:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.485 14:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:38.485 14:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.485 14:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:38.485 14:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.485 14:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.744 00:22:38.744 14:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:38.744 14:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:38.744 14:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.004 14:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.004 14:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.004 14:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.004 14:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.004 14:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.004 14:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:39.004 { 00:22:39.004 "cntlid": 77, 00:22:39.004 "qid": 0, 00:22:39.004 "state": "enabled", 00:22:39.004 "listen_address": { 00:22:39.004 "trtype": "TCP", 00:22:39.004 "adrfam": "IPv4", 00:22:39.004 "traddr": "10.0.0.2", 00:22:39.004 "trsvcid": "4420" 00:22:39.004 }, 00:22:39.004 "peer_address": { 00:22:39.004 "trtype": "TCP", 00:22:39.004 "adrfam": "IPv4", 00:22:39.004 "traddr": "10.0.0.1", 00:22:39.004 "trsvcid": "45162" 00:22:39.004 }, 00:22:39.004 "auth": { 00:22:39.004 "state": "completed", 00:22:39.004 "digest": "sha384", 00:22:39.004 "dhgroup": "ffdhe4096" 00:22:39.004 } 00:22:39.004 } 00:22:39.004 ]' 00:22:39.004 14:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:39.004 14:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:39.004 14:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:22:39.004 14:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:39.004 14:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:39.004 14:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.004 14:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.004 14:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.263 14:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:22:39.831 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.831 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:39.831 14:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.831 14:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.831 14:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.831 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:39.831 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:39.831 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:40.091 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:22:40.091 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:40.091 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:40.091 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:40.091 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:40.091 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.091 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:22:40.091 14:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:40.091 14:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.091 14:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:40.091 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.091 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.351 00:22:40.351 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:40.351 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.351 14:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:40.610 14:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.610 14:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.610 14:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:40.610 14:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.610 14:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:40.610 14:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:40.610 { 00:22:40.610 "cntlid": 79, 00:22:40.610 "qid": 0, 00:22:40.610 "state": "enabled", 00:22:40.610 "listen_address": { 00:22:40.610 "trtype": "TCP", 00:22:40.610 "adrfam": "IPv4", 00:22:40.610 "traddr": "10.0.0.2", 00:22:40.610 "trsvcid": "4420" 00:22:40.610 }, 00:22:40.610 "peer_address": { 00:22:40.610 "trtype": "TCP", 00:22:40.610 "adrfam": "IPv4", 00:22:40.610 "traddr": "10.0.0.1", 00:22:40.610 "trsvcid": "45178" 00:22:40.610 }, 00:22:40.610 "auth": { 00:22:40.610 "state": "completed", 00:22:40.610 "digest": "sha384", 00:22:40.610 "dhgroup": "ffdhe4096" 00:22:40.610 } 00:22:40.610 } 00:22:40.610 ]' 00:22:40.610 14:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:40.610 14:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:40.610 14:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:40.610 14:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:40.610 14:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:40.610 14:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.610 14:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.610 14:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.870 14:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:22:41.440 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.440 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.440 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:41.440 14:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.440 14:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.440 14:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.440 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.440 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:41.440 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:41.440 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:41.699 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:22:41.699 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:41.699 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:41.699 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:41.699 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:41.699 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.699 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.699 14:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.699 14:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.699 14:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.699 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.699 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.959 00:22:41.959 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:41.959 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:41.959 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.219 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.219 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.219 14:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.219 14:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.219 14:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.219 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:42.219 { 00:22:42.219 "cntlid": 81, 00:22:42.219 "qid": 0, 00:22:42.219 "state": "enabled", 00:22:42.219 "listen_address": { 00:22:42.219 "trtype": "TCP", 00:22:42.219 "adrfam": "IPv4", 00:22:42.219 "traddr": "10.0.0.2", 00:22:42.219 "trsvcid": "4420" 00:22:42.219 }, 00:22:42.219 "peer_address": { 00:22:42.219 "trtype": "TCP", 00:22:42.219 "adrfam": "IPv4", 00:22:42.219 "traddr": "10.0.0.1", 00:22:42.219 "trsvcid": "45226" 00:22:42.219 }, 00:22:42.219 "auth": { 00:22:42.219 "state": "completed", 00:22:42.219 "digest": "sha384", 00:22:42.219 "dhgroup": "ffdhe6144" 00:22:42.219 } 00:22:42.219 } 00:22:42.220 ]' 00:22:42.220 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:42.220 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:42.220 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:42.220 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:42.220 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:42.220 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.220 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.220 14:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.480 14:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:22:43.105 14:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.105 14:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:43.105 14:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.105 14:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.105 14:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.105 14:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:43.105 14:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:43.105 14:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:43.366 14:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:22:43.366 14:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:43.366 14:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:43.366 14:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:43.366 14:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:43.366 14:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.366 14:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.366 14:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.366 14:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.366 14:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.366 14:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.366 14:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.628 00:22:43.889 14:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:43.889 14:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:43.889 14:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.889 14:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.889 14:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.889 14:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.889 14:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.889 14:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.889 14:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:43.889 { 00:22:43.889 "cntlid": 83, 00:22:43.889 "qid": 0, 00:22:43.889 "state": "enabled", 00:22:43.889 "listen_address": { 00:22:43.889 "trtype": "TCP", 00:22:43.889 "adrfam": "IPv4", 00:22:43.889 "traddr": "10.0.0.2", 00:22:43.889 "trsvcid": "4420" 00:22:43.889 }, 00:22:43.889 "peer_address": { 00:22:43.889 "trtype": "TCP", 00:22:43.889 "adrfam": "IPv4", 00:22:43.889 "traddr": "10.0.0.1", 00:22:43.889 "trsvcid": "45250" 00:22:43.889 }, 00:22:43.889 "auth": { 00:22:43.889 "state": "completed", 00:22:43.889 "digest": "sha384", 00:22:43.889 
"dhgroup": "ffdhe6144" 00:22:43.889 } 00:22:43.889 } 00:22:43.889 ]' 00:22:43.889 14:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:43.889 14:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:43.889 14:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:43.889 14:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:43.889 14:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:44.150 14:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.150 14:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.150 14:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.150 14:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:22:45.090 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.090 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:45.090 14:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.090 14:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.090 14:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.090 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:45.091 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:45.091 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:45.091 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:22:45.091 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:45.091 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:45.091 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:45.091 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:45.091 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.091 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.091 14:25:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.091 14:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.091 14:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.091 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.091 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.351 00:22:45.613 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:45.613 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:45.613 14:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.613 14:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.613 14:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.613 14:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.613 14:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.613 14:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.613 14:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:45.613 { 00:22:45.613 "cntlid": 85, 00:22:45.613 "qid": 0, 00:22:45.613 "state": "enabled", 00:22:45.613 "listen_address": { 00:22:45.613 "trtype": "TCP", 00:22:45.613 "adrfam": "IPv4", 00:22:45.613 "traddr": "10.0.0.2", 00:22:45.613 "trsvcid": "4420" 00:22:45.613 }, 00:22:45.613 "peer_address": { 00:22:45.613 "trtype": "TCP", 00:22:45.613 "adrfam": "IPv4", 00:22:45.613 "traddr": "10.0.0.1", 00:22:45.613 "trsvcid": "45274" 00:22:45.613 }, 00:22:45.613 "auth": { 00:22:45.613 "state": "completed", 00:22:45.613 "digest": "sha384", 00:22:45.613 "dhgroup": "ffdhe6144" 00:22:45.613 } 00:22:45.613 } 00:22:45.613 ]' 00:22:45.613 14:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:45.613 14:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:45.613 14:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:45.874 14:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:45.874 14:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:45.874 14:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.874 14:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.874 14:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.874 14:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:46.817 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:47.078 00:22:47.078 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:47.078 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:47.078 14:25:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.339 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.339 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.339 14:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.339 14:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.339 14:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.339 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:47.339 { 00:22:47.339 "cntlid": 87, 00:22:47.339 "qid": 0, 00:22:47.340 "state": "enabled", 00:22:47.340 "listen_address": { 00:22:47.340 "trtype": "TCP", 00:22:47.340 "adrfam": "IPv4", 00:22:47.340 "traddr": "10.0.0.2", 00:22:47.340 "trsvcid": "4420" 00:22:47.340 }, 00:22:47.340 "peer_address": { 00:22:47.340 "trtype": "TCP", 00:22:47.340 "adrfam": "IPv4", 00:22:47.340 "traddr": "10.0.0.1", 00:22:47.340 "trsvcid": "45288" 00:22:47.340 }, 00:22:47.340 "auth": { 00:22:47.340 "state": "completed", 00:22:47.340 "digest": "sha384", 00:22:47.340 "dhgroup": "ffdhe6144" 00:22:47.340 } 00:22:47.340 } 00:22:47.340 ]' 00:22:47.340 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:47.340 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:47.340 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:47.340 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:47.340 14:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:47.601 14:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.601 14:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.601 14:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.601 14:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:22:48.544 14:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.544 14:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:48.544 14:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.544 14:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.544 14:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.544 14:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:48.544 14:25:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:48.544 14:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:48.545 14:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:48.545 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:22:48.545 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:48.545 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:48.545 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:48.545 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:48.545 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.545 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.545 14:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.545 14:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.545 14:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.545 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.545 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.116 00:22:49.116 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:49.116 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:49.116 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.116 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.116 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.116 14:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:49.116 14:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.116 14:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:49.377 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:49.377 { 00:22:49.377 "cntlid": 89, 00:22:49.377 "qid": 0, 00:22:49.377 "state": "enabled", 00:22:49.377 "listen_address": { 00:22:49.377 "trtype": "TCP", 00:22:49.377 "adrfam": "IPv4", 00:22:49.377 "traddr": "10.0.0.2", 00:22:49.377 
"trsvcid": "4420" 00:22:49.377 }, 00:22:49.377 "peer_address": { 00:22:49.377 "trtype": "TCP", 00:22:49.377 "adrfam": "IPv4", 00:22:49.377 "traddr": "10.0.0.1", 00:22:49.377 "trsvcid": "56892" 00:22:49.377 }, 00:22:49.377 "auth": { 00:22:49.377 "state": "completed", 00:22:49.377 "digest": "sha384", 00:22:49.377 "dhgroup": "ffdhe8192" 00:22:49.377 } 00:22:49.377 } 00:22:49.377 ]' 00:22:49.377 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:49.377 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:49.377 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:49.377 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:49.377 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:49.377 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.377 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.377 14:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.636 14:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:22:50.207 14:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.207 14:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:50.207 14:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.207 14:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.207 14:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.207 14:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:50.207 14:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:50.207 14:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:50.468 14:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:22:50.468 14:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:50.468 14:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:50.468 14:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:50.468 14:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:50.468 14:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.468 14:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.468 14:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.468 14:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.468 14:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.468 14:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.468 14:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.040 00:22:51.040 14:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:51.040 14:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:51.040 14:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.040 14:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.040 14:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.040 14:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:51.040 14:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.040 14:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:51.040 14:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:51.040 { 00:22:51.040 "cntlid": 91, 00:22:51.040 "qid": 0, 00:22:51.040 "state": "enabled", 00:22:51.040 "listen_address": { 00:22:51.040 "trtype": "TCP", 00:22:51.040 "adrfam": "IPv4", 00:22:51.040 "traddr": "10.0.0.2", 00:22:51.040 "trsvcid": "4420" 00:22:51.040 }, 00:22:51.040 "peer_address": { 00:22:51.040 "trtype": "TCP", 00:22:51.040 "adrfam": "IPv4", 00:22:51.040 "traddr": "10.0.0.1", 00:22:51.040 "trsvcid": "56908" 00:22:51.040 }, 00:22:51.040 "auth": { 00:22:51.040 "state": "completed", 00:22:51.040 "digest": "sha384", 00:22:51.040 "dhgroup": "ffdhe8192" 00:22:51.040 } 00:22:51.040 } 00:22:51.040 ]' 00:22:51.040 14:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:51.300 14:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:51.300 14:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:51.300 14:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:51.300 14:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:51.300 14:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.300 14:25:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.300 14:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.561 14:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:22:52.132 14:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.132 14:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:52.132 14:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.132 14:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.132 14:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.132 14:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:52.132 14:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:52.132 14:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:52.392 14:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:22:52.393 14:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:52.393 14:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:52.393 14:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:52.393 14:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:52.393 14:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.393 14:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.393 14:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.393 14:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.393 14:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.393 14:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.393 14:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.963 00:22:52.963 14:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:52.963 14:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:52.963 14:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.963 14:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.963 14:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.963 14:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.963 14:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.963 14:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.963 14:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:52.963 { 00:22:52.963 "cntlid": 93, 00:22:52.963 "qid": 0, 00:22:52.963 "state": "enabled", 00:22:52.963 "listen_address": { 00:22:52.963 "trtype": "TCP", 00:22:52.963 "adrfam": "IPv4", 00:22:52.963 "traddr": "10.0.0.2", 00:22:52.963 "trsvcid": "4420" 00:22:52.963 }, 00:22:52.963 "peer_address": { 00:22:52.963 "trtype": "TCP", 00:22:52.963 "adrfam": "IPv4", 00:22:52.963 "traddr": "10.0.0.1", 00:22:52.963 "trsvcid": "56926" 00:22:52.963 }, 00:22:52.963 "auth": { 00:22:52.963 "state": "completed", 00:22:52.963 "digest": "sha384", 00:22:52.963 "dhgroup": "ffdhe8192" 00:22:52.963 } 00:22:52.963 } 00:22:52.963 ]' 00:22:52.963 14:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:52.963 14:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:52.963 14:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:53.225 14:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:53.225 14:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:53.225 14:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.225 14:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.225 14:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.225 14:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:54.168 14:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:54.740 00:22:54.740 14:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:54.740 14:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:54.740 14:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.001 14:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.001 14:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.001 14:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.001 14:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.001 14:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.001 14:25:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:55.001 { 00:22:55.001 "cntlid": 95, 00:22:55.001 "qid": 0, 00:22:55.001 "state": "enabled", 00:22:55.001 "listen_address": { 00:22:55.001 "trtype": "TCP", 00:22:55.001 "adrfam": "IPv4", 00:22:55.001 "traddr": "10.0.0.2", 00:22:55.001 "trsvcid": "4420" 00:22:55.001 }, 00:22:55.001 "peer_address": { 00:22:55.001 "trtype": "TCP", 00:22:55.001 "adrfam": "IPv4", 00:22:55.001 "traddr": "10.0.0.1", 00:22:55.001 "trsvcid": "56952" 00:22:55.001 }, 00:22:55.001 "auth": { 00:22:55.001 "state": "completed", 00:22:55.001 "digest": "sha384", 00:22:55.001 "dhgroup": "ffdhe8192" 00:22:55.001 } 00:22:55.001 } 00:22:55.001 ]' 00:22:55.001 14:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:55.001 14:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:55.001 14:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:55.001 14:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:55.001 14:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:55.001 14:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.001 14:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.001 14:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.262 14:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:22:55.834 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.834 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:55.834 14:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:55.834 14:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.834 14:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:55.834 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:55.834 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:55.834 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:55.834 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:55.834 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:56.095 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:22:56.095 14:25:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:56.095 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:56.095 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:56.095 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:56.095 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.095 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.095 14:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.095 14:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.095 14:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.095 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.095 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.356 00:22:56.356 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:56.356 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:56.356 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.356 14:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.356 14:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.618 14:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:56.618 14:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.618 14:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:56.618 14:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:56.618 { 00:22:56.618 "cntlid": 97, 00:22:56.618 "qid": 0, 00:22:56.618 "state": "enabled", 00:22:56.618 "listen_address": { 00:22:56.618 "trtype": "TCP", 00:22:56.618 "adrfam": "IPv4", 00:22:56.618 "traddr": "10.0.0.2", 00:22:56.618 "trsvcid": "4420" 00:22:56.618 }, 00:22:56.618 "peer_address": { 00:22:56.618 "trtype": "TCP", 00:22:56.618 "adrfam": "IPv4", 00:22:56.618 "traddr": "10.0.0.1", 00:22:56.618 "trsvcid": "56972" 00:22:56.618 }, 00:22:56.618 "auth": { 00:22:56.618 "state": "completed", 00:22:56.618 "digest": "sha512", 00:22:56.618 "dhgroup": "null" 00:22:56.618 } 00:22:56.618 } 00:22:56.618 ]' 00:22:56.618 14:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:56.618 14:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:56.618 14:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:22:56.618 14:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:56.618 14:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:56.618 14:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.618 14:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.618 14:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.879 14:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:22:57.449 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.449 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:57.449 14:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:57.449 14:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.449 14:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:57.449 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:57.449 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:57.449 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:57.709 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:22:57.709 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:57.709 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:57.709 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:57.709 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:57.709 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.709 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.709 14:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:57.709 14:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.709 14:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:57.709 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.709 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.970 00:22:57.970 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:57.970 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:57.970 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.970 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.970 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.970 14:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:57.970 14:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.970 14:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:57.970 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:57.970 { 00:22:57.970 "cntlid": 99, 00:22:57.970 "qid": 0, 00:22:57.970 "state": "enabled", 00:22:57.970 "listen_address": { 00:22:57.970 "trtype": "TCP", 00:22:57.970 "adrfam": "IPv4", 00:22:57.970 "traddr": "10.0.0.2", 00:22:57.970 "trsvcid": "4420" 00:22:57.970 }, 00:22:57.970 "peer_address": { 00:22:57.970 "trtype": "TCP", 00:22:57.970 "adrfam": "IPv4", 00:22:57.970 "traddr": "10.0.0.1", 00:22:57.970 "trsvcid": "57004" 00:22:57.970 }, 00:22:57.970 "auth": { 00:22:57.970 "state": "completed", 00:22:57.970 "digest": "sha512", 00:22:57.970 "dhgroup": "null" 00:22:57.970 } 00:22:57.970 } 00:22:57.970 ]' 00:22:57.970 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:58.231 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:58.231 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:58.231 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:58.231 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:58.231 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.231 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.231 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.493 14:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 
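A condensed sketch of the round this trace repeats for each digest/dhgroup/key combination, reconstructed only from the commands already visible above (rpc_cmd is the harness's RPC helper for the nvmf target, hostrpc wraps rpc.py -s /var/tmp/host.sock on the initiator side, and $key/$ckey stand for the literal DHHC-1 secrets printed in the log):

    uuid=801c19ac-fce9-ec11-9bc7-a4bf019282bb
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:$uuid
    # initiator: restrict DH-HMAC-CHAP to the digest/dhgroup under test            (target/auth.sh@94)
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    # target: allow this host on the subsystem with the key pair under test        (target/auth.sh@39)
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # attach over TCP, then check the qpair reports auth state "completed"
    # with the expected digest and dhgroup                                          (target/auth.sh@40, @44-@48)
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $hostnqn -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
    hostrpc bdev_nvme_detach_controller nvme0
    # repeat the handshake with the kernel initiator using the literal secrets,
    # then clean up                                                                 (target/auth.sh@52, @55, @56)
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q $hostnqn --hostid $uuid --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 $hostnqn
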
00:22:59.064 14:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.064 14:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:59.064 14:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:59.064 14:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.064 14:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:59.064 14:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:59.064 14:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:59.064 14:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:59.324 14:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:22:59.324 14:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:59.324 14:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:59.324 14:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:59.324 14:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:59.324 14:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.324 14:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.324 14:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:59.324 14:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.324 14:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:59.324 14:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.324 14:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.585 00:22:59.585 14:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:59.585 14:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:59.585 14:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.585 14:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.585 14:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.585 14:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:59.585 14:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.585 14:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:59.585 14:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:59.585 { 00:22:59.585 "cntlid": 101, 00:22:59.585 "qid": 0, 00:22:59.585 "state": "enabled", 00:22:59.585 "listen_address": { 00:22:59.585 "trtype": "TCP", 00:22:59.585 "adrfam": "IPv4", 00:22:59.585 "traddr": "10.0.0.2", 00:22:59.585 "trsvcid": "4420" 00:22:59.585 }, 00:22:59.585 "peer_address": { 00:22:59.585 "trtype": "TCP", 00:22:59.585 "adrfam": "IPv4", 00:22:59.585 "traddr": "10.0.0.1", 00:22:59.585 "trsvcid": "47226" 00:22:59.585 }, 00:22:59.585 "auth": { 00:22:59.585 "state": "completed", 00:22:59.585 "digest": "sha512", 00:22:59.585 "dhgroup": "null" 00:22:59.585 } 00:22:59.585 } 00:22:59.585 ]' 00:22:59.585 14:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:59.844 14:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:59.844 14:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:59.844 14:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:59.844 14:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:59.844 14:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.844 14:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.844 14:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.103 14:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:23:00.672 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.672 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:00.672 14:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.672 14:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.672 14:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:00.672 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:00.672 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:00.672 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:23:00.932 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:23:00.932 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:00.932 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:00.932 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:00.932 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:00.932 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.932 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:23:00.932 14:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.932 14:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.932 14:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:00.932 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:00.932 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:01.194 00:23:01.194 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:01.194 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:01.194 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.194 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.194 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.194 14:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.194 14:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.194 14:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.194 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:01.194 { 00:23:01.194 "cntlid": 103, 00:23:01.194 "qid": 0, 00:23:01.194 "state": "enabled", 00:23:01.194 "listen_address": { 00:23:01.194 "trtype": "TCP", 00:23:01.194 "adrfam": "IPv4", 00:23:01.194 "traddr": "10.0.0.2", 00:23:01.194 "trsvcid": "4420" 00:23:01.194 }, 00:23:01.194 "peer_address": { 00:23:01.194 "trtype": "TCP", 00:23:01.194 "adrfam": "IPv4", 00:23:01.194 "traddr": "10.0.0.1", 00:23:01.194 "trsvcid": "47244" 00:23:01.194 }, 00:23:01.194 "auth": { 00:23:01.194 "state": "completed", 00:23:01.194 "digest": "sha512", 00:23:01.194 "dhgroup": "null" 00:23:01.194 } 00:23:01.194 } 00:23:01.194 ]' 00:23:01.194 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:01.194 14:25:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:01.194 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:01.485 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:23:01.485 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:01.485 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.485 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.485 14:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.485 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.428 14:25:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.428 14:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.690 00:23:02.690 14:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:02.690 14:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.690 14:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:02.690 14:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.690 14:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.690 14:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.950 14:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.950 14:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.950 14:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:02.950 { 00:23:02.950 "cntlid": 105, 00:23:02.950 "qid": 0, 00:23:02.950 "state": "enabled", 00:23:02.950 "listen_address": { 00:23:02.950 "trtype": "TCP", 00:23:02.950 "adrfam": "IPv4", 00:23:02.950 "traddr": "10.0.0.2", 00:23:02.950 "trsvcid": "4420" 00:23:02.950 }, 00:23:02.950 "peer_address": { 00:23:02.950 "trtype": "TCP", 00:23:02.950 "adrfam": "IPv4", 00:23:02.950 "traddr": "10.0.0.1", 00:23:02.950 "trsvcid": "47272" 00:23:02.950 }, 00:23:02.950 "auth": { 00:23:02.950 "state": "completed", 00:23:02.950 "digest": "sha512", 00:23:02.950 "dhgroup": "ffdhe2048" 00:23:02.950 } 00:23:02.950 } 00:23:02.950 ]' 00:23:02.950 14:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:02.950 14:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.950 14:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:02.950 14:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:02.950 14:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:02.950 14:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.950 14:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.950 14:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.210 14:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:23:03.781 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.781 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:03.781 14:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.781 14:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.781 14:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.781 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:03.781 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:03.781 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:04.120 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:23:04.120 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:04.120 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:04.120 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:04.120 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:04.120 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.120 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.120 14:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:04.120 14:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.120 14:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:04.120 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.120 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.120 00:23:04.382 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:04.382 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:04.382 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.382 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.382 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.382 14:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:04.382 14:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.382 14:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:04.382 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:04.382 { 00:23:04.382 "cntlid": 107, 00:23:04.382 "qid": 0, 00:23:04.382 "state": "enabled", 00:23:04.382 "listen_address": { 00:23:04.382 "trtype": "TCP", 00:23:04.382 "adrfam": "IPv4", 00:23:04.382 "traddr": "10.0.0.2", 00:23:04.382 "trsvcid": "4420" 00:23:04.382 }, 00:23:04.382 "peer_address": { 00:23:04.382 "trtype": "TCP", 00:23:04.382 "adrfam": "IPv4", 00:23:04.382 "traddr": "10.0.0.1", 00:23:04.382 "trsvcid": "47312" 00:23:04.382 }, 00:23:04.382 "auth": { 00:23:04.382 "state": "completed", 00:23:04.382 "digest": "sha512", 00:23:04.382 "dhgroup": "ffdhe2048" 00:23:04.382 } 00:23:04.382 } 00:23:04.382 ]' 00:23:04.382 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:04.382 14:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:04.382 14:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:04.643 14:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:04.643 14:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:04.643 14:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.643 14:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.643 14:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.643 14:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:23:05.585 14:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.585 14:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:05.585 14:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:05.585 14:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.585 14:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:05.585 14:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:05.585 14:25:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:05.585 14:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:05.585 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:23:05.585 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:05.585 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:05.585 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:05.585 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:05.585 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.585 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.585 14:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:05.585 14:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.585 14:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:05.585 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.585 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.844 00:23:05.844 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:05.844 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:05.844 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.103 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.103 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.103 14:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.103 14:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.103 14:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:06.103 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:06.103 { 00:23:06.103 "cntlid": 109, 00:23:06.103 "qid": 0, 00:23:06.103 "state": "enabled", 00:23:06.103 "listen_address": { 00:23:06.103 "trtype": "TCP", 00:23:06.103 "adrfam": "IPv4", 00:23:06.103 "traddr": "10.0.0.2", 00:23:06.103 "trsvcid": "4420" 00:23:06.103 }, 00:23:06.103 "peer_address": { 00:23:06.103 "trtype": "TCP", 00:23:06.103 
"adrfam": "IPv4", 00:23:06.103 "traddr": "10.0.0.1", 00:23:06.103 "trsvcid": "47344" 00:23:06.103 }, 00:23:06.103 "auth": { 00:23:06.103 "state": "completed", 00:23:06.103 "digest": "sha512", 00:23:06.103 "dhgroup": "ffdhe2048" 00:23:06.103 } 00:23:06.103 } 00:23:06.103 ]' 00:23:06.103 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:06.103 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:06.103 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:06.103 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:06.103 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:06.103 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.103 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.103 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.363 14:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:23:06.932 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.932 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:06.932 14:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.932 14:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.932 14:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:06.932 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:06.932 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:06.932 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:07.192 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:23:07.192 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:07.192 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:07.192 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:07.192 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:07.192 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.192 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:23:07.192 14:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:07.192 14:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.192 14:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:07.192 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:07.192 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:07.451 00:23:07.451 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:07.451 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.451 14:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:07.711 14:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.711 14:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.711 14:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:07.711 14:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.711 14:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:07.711 14:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:07.711 { 00:23:07.711 "cntlid": 111, 00:23:07.711 "qid": 0, 00:23:07.711 "state": "enabled", 00:23:07.711 "listen_address": { 00:23:07.711 "trtype": "TCP", 00:23:07.711 "adrfam": "IPv4", 00:23:07.711 "traddr": "10.0.0.2", 00:23:07.711 "trsvcid": "4420" 00:23:07.711 }, 00:23:07.711 "peer_address": { 00:23:07.711 "trtype": "TCP", 00:23:07.711 "adrfam": "IPv4", 00:23:07.711 "traddr": "10.0.0.1", 00:23:07.711 "trsvcid": "47380" 00:23:07.711 }, 00:23:07.711 "auth": { 00:23:07.711 "state": "completed", 00:23:07.711 "digest": "sha512", 00:23:07.711 "dhgroup": "ffdhe2048" 00:23:07.711 } 00:23:07.711 } 00:23:07.711 ]' 00:23:07.711 14:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:07.711 14:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:07.711 14:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:07.711 14:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:07.711 14:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:07.711 14:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.711 14:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.711 14:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.971 14:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:23:08.540 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.540 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:08.540 14:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.540 14:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.540 14:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.540 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:08.540 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:08.540 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:08.540 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:08.800 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:23:08.800 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:08.800 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:08.800 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:08.800 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:08.800 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.800 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.800 14:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:08.800 14:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.800 14:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:08.800 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.800 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
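The @91/@92/@93 markers in this trace show the loop nesting that drives these rounds; schematically (a sketch of the driver loop as suggested by the trace, with the body condensed to the two calls logged at @94 and @96):

    for digest in "${digests[@]}"; do          # target/auth.sh@91  e.g. sha384, sha512
      for dhgroup in "${dhgroups[@]}"; do      # target/auth.sh@92  null, ffdhe2048, ffdhe3072, ... ffdhe8192
        for keyid in "${!keys[@]}"; do         # target/auth.sh@93  key0 .. key3
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"   # @94
          connect_authenticate "$digest" "$dhgroup" "$keyid"                                       # @96
        done
      done
    done
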
00:23:09.060 00:23:09.060 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:09.060 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:09.060 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.060 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.060 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.060 14:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:09.060 14:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.320 14:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:09.320 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:09.320 { 00:23:09.320 "cntlid": 113, 00:23:09.320 "qid": 0, 00:23:09.320 "state": "enabled", 00:23:09.320 "listen_address": { 00:23:09.320 "trtype": "TCP", 00:23:09.320 "adrfam": "IPv4", 00:23:09.320 "traddr": "10.0.0.2", 00:23:09.320 "trsvcid": "4420" 00:23:09.320 }, 00:23:09.320 "peer_address": { 00:23:09.320 "trtype": "TCP", 00:23:09.320 "adrfam": "IPv4", 00:23:09.320 "traddr": "10.0.0.1", 00:23:09.320 "trsvcid": "57130" 00:23:09.320 }, 00:23:09.320 "auth": { 00:23:09.320 "state": "completed", 00:23:09.320 "digest": "sha512", 00:23:09.320 "dhgroup": "ffdhe3072" 00:23:09.320 } 00:23:09.320 } 00:23:09.320 ]' 00:23:09.320 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:09.320 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:09.320 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:09.320 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:09.320 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:09.320 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.320 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.320 14:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.582 14:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:23:10.151 14:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.151 14:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:10.151 14:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
00:23:10.151 14:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.151 14:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:10.151 14:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:10.151 14:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:10.151 14:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:10.411 14:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:23:10.411 14:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:10.411 14:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:10.411 14:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:10.411 14:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:10.411 14:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.411 14:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.411 14:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:10.411 14:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.411 14:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:10.411 14:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.411 14:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.670 00:23:10.671 14:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:10.671 14:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:10.671 14:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.930 14:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.930 14:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.930 14:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:10.930 14:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.930 14:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:10.930 14:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:10.930 { 00:23:10.930 
"cntlid": 115, 00:23:10.930 "qid": 0, 00:23:10.930 "state": "enabled", 00:23:10.930 "listen_address": { 00:23:10.930 "trtype": "TCP", 00:23:10.930 "adrfam": "IPv4", 00:23:10.930 "traddr": "10.0.0.2", 00:23:10.930 "trsvcid": "4420" 00:23:10.930 }, 00:23:10.930 "peer_address": { 00:23:10.930 "trtype": "TCP", 00:23:10.930 "adrfam": "IPv4", 00:23:10.930 "traddr": "10.0.0.1", 00:23:10.930 "trsvcid": "57146" 00:23:10.930 }, 00:23:10.930 "auth": { 00:23:10.930 "state": "completed", 00:23:10.930 "digest": "sha512", 00:23:10.930 "dhgroup": "ffdhe3072" 00:23:10.930 } 00:23:10.930 } 00:23:10.930 ]' 00:23:10.930 14:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:10.930 14:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:10.931 14:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:10.931 14:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:10.931 14:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:10.931 14:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.931 14:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.931 14:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.190 14:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:23:11.761 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.761 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:11.761 14:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.761 14:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.761 14:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.761 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:11.761 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:11.762 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:12.021 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:23:12.021 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:12.021 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:12.021 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:23:12.021 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:12.021 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.021 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.021 14:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.021 14:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.021 14:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.022 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.022 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.281 00:23:12.281 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:12.281 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.281 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:12.281 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.281 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.281 14:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:12.281 14:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.541 14:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:12.541 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:12.541 { 00:23:12.541 "cntlid": 117, 00:23:12.541 "qid": 0, 00:23:12.541 "state": "enabled", 00:23:12.541 "listen_address": { 00:23:12.541 "trtype": "TCP", 00:23:12.541 "adrfam": "IPv4", 00:23:12.541 "traddr": "10.0.0.2", 00:23:12.541 "trsvcid": "4420" 00:23:12.541 }, 00:23:12.541 "peer_address": { 00:23:12.541 "trtype": "TCP", 00:23:12.541 "adrfam": "IPv4", 00:23:12.541 "traddr": "10.0.0.1", 00:23:12.541 "trsvcid": "57190" 00:23:12.541 }, 00:23:12.541 "auth": { 00:23:12.541 "state": "completed", 00:23:12.541 "digest": "sha512", 00:23:12.541 "dhgroup": "ffdhe3072" 00:23:12.541 } 00:23:12.541 } 00:23:12.541 ]' 00:23:12.541 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:12.541 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:12.541 14:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:12.541 14:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:12.541 14:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:23:12.541 14:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.541 14:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.541 14:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.801 14:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:23:13.371 14:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.371 14:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:13.371 14:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:13.371 14:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.371 14:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:13.371 14:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:13.371 14:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:13.371 14:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:13.631 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:23:13.631 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:13.631 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:13.631 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:13.631 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:13.631 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.631 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:23:13.631 14:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:13.631 14:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.631 14:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:13.631 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:13.631 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:13.891 00:23:13.891 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:13.891 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:13.891 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.891 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.891 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.891 14:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:13.891 14:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.891 14:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:13.891 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:13.891 { 00:23:13.891 "cntlid": 119, 00:23:13.891 "qid": 0, 00:23:13.891 "state": "enabled", 00:23:13.891 "listen_address": { 00:23:13.891 "trtype": "TCP", 00:23:13.891 "adrfam": "IPv4", 00:23:13.891 "traddr": "10.0.0.2", 00:23:13.891 "trsvcid": "4420" 00:23:13.891 }, 00:23:13.891 "peer_address": { 00:23:13.891 "trtype": "TCP", 00:23:13.891 "adrfam": "IPv4", 00:23:13.891 "traddr": "10.0.0.1", 00:23:13.891 "trsvcid": "57214" 00:23:13.891 }, 00:23:13.891 "auth": { 00:23:13.891 "state": "completed", 00:23:13.891 "digest": "sha512", 00:23:13.891 "dhgroup": "ffdhe3072" 00:23:13.891 } 00:23:13.891 } 00:23:13.891 ]' 00:23:13.891 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:14.152 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:14.152 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:14.152 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:14.152 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:14.152 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.152 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.152 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.412 14:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:23:14.983 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.984 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.244 00:23:15.244 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:15.244 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:15.244 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.503 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.503 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.503 14:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.504 14:25:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.504 14:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.504 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:15.504 { 00:23:15.504 "cntlid": 121, 00:23:15.504 "qid": 0, 00:23:15.504 "state": "enabled", 00:23:15.504 "listen_address": { 00:23:15.504 "trtype": "TCP", 00:23:15.504 "adrfam": "IPv4", 00:23:15.504 "traddr": "10.0.0.2", 00:23:15.504 "trsvcid": "4420" 00:23:15.504 }, 00:23:15.504 "peer_address": { 00:23:15.504 "trtype": "TCP", 00:23:15.504 "adrfam": "IPv4", 00:23:15.504 "traddr": "10.0.0.1", 00:23:15.504 "trsvcid": "57238" 00:23:15.504 }, 00:23:15.504 "auth": { 00:23:15.504 "state": "completed", 00:23:15.504 "digest": "sha512", 00:23:15.504 "dhgroup": "ffdhe4096" 00:23:15.504 } 00:23:15.504 } 00:23:15.504 ]' 00:23:15.504 14:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:15.504 14:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:15.504 14:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:15.504 14:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:15.504 14:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:15.504 14:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.504 14:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.504 14:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.765 14:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:23:16.335 14:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.335 14:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:16.335 14:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:16.335 14:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.335 14:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:16.335 14:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:16.335 14:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:16.335 14:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:16.595 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:23:16.595 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:16.595 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:16.595 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:16.595 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:16.595 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.595 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.595 14:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:16.595 14:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.595 14:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:16.595 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.595 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.855 00:23:16.855 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:16.855 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.855 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:16.855 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.855 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:16.855 14:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:16.855 14:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.855 14:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:17.117 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:17.117 { 00:23:17.117 "cntlid": 123, 00:23:17.117 "qid": 0, 00:23:17.117 "state": "enabled", 00:23:17.117 "listen_address": { 00:23:17.117 "trtype": "TCP", 00:23:17.117 "adrfam": "IPv4", 00:23:17.117 "traddr": "10.0.0.2", 00:23:17.117 "trsvcid": "4420" 00:23:17.117 }, 00:23:17.117 "peer_address": { 00:23:17.117 "trtype": "TCP", 00:23:17.117 "adrfam": "IPv4", 00:23:17.117 "traddr": "10.0.0.1", 00:23:17.117 "trsvcid": "57266" 00:23:17.117 }, 00:23:17.117 "auth": { 00:23:17.117 "state": "completed", 00:23:17.117 "digest": "sha512", 00:23:17.117 "dhgroup": "ffdhe4096" 00:23:17.117 } 00:23:17.117 } 00:23:17.117 ]' 00:23:17.117 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:17.117 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:23:17.117 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:17.117 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:17.117 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:17.117 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.117 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.117 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.396 14:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:23:18.026 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.026 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:18.026 14:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.026 14:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.026 14:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.026 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:18.026 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:18.026 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:18.287 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:23:18.287 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:18.287 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:18.287 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:18.287 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:18.287 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.287 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.287 14:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.287 14:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.287 14:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.287 
14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.287 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.548 00:23:18.548 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:18.548 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:18.548 14:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.548 14:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.548 14:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:18.548 14:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:18.548 14:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.549 14:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:18.549 14:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:18.549 { 00:23:18.549 "cntlid": 125, 00:23:18.549 "qid": 0, 00:23:18.549 "state": "enabled", 00:23:18.549 "listen_address": { 00:23:18.549 "trtype": "TCP", 00:23:18.549 "adrfam": "IPv4", 00:23:18.549 "traddr": "10.0.0.2", 00:23:18.549 "trsvcid": "4420" 00:23:18.549 }, 00:23:18.549 "peer_address": { 00:23:18.549 "trtype": "TCP", 00:23:18.549 "adrfam": "IPv4", 00:23:18.549 "traddr": "10.0.0.1", 00:23:18.549 "trsvcid": "59306" 00:23:18.549 }, 00:23:18.549 "auth": { 00:23:18.549 "state": "completed", 00:23:18.549 "digest": "sha512", 00:23:18.549 "dhgroup": "ffdhe4096" 00:23:18.549 } 00:23:18.549 } 00:23:18.549 ]' 00:23:18.549 14:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:18.810 14:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:18.810 14:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:18.810 14:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:18.810 14:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:18.810 14:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:18.810 14:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:18.810 14:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.810 14:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret 
DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:19.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:19.752 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:20.013 00:23:20.013 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:20.013 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:20.013 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.275 14:25:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.275 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.275 14:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.275 14:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.275 14:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.275 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:20.275 { 00:23:20.275 "cntlid": 127, 00:23:20.275 "qid": 0, 00:23:20.275 "state": "enabled", 00:23:20.275 "listen_address": { 00:23:20.275 "trtype": "TCP", 00:23:20.275 "adrfam": "IPv4", 00:23:20.275 "traddr": "10.0.0.2", 00:23:20.275 "trsvcid": "4420" 00:23:20.275 }, 00:23:20.275 "peer_address": { 00:23:20.275 "trtype": "TCP", 00:23:20.275 "adrfam": "IPv4", 00:23:20.275 "traddr": "10.0.0.1", 00:23:20.275 "trsvcid": "59332" 00:23:20.275 }, 00:23:20.275 "auth": { 00:23:20.275 "state": "completed", 00:23:20.275 "digest": "sha512", 00:23:20.275 "dhgroup": "ffdhe4096" 00:23:20.275 } 00:23:20.275 } 00:23:20.275 ]' 00:23:20.275 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:20.275 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:20.275 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:20.275 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:20.275 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:20.275 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.275 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.275 14:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:20.537 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
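The verification half of each cycle (target/auth.sh lines 44-49 in the trace) confirms the controller attached under the expected name and then inspects the qpair's auth block on the target. A hedged sketch of that check, again built only from the jq expressions and RPCs present in the log; the DIGEST/DHGROUP variables and the reuse of HOSTSOCK/RPC/SUBNQN from the sketch above are illustrative:

# Verify the attached controller and the negotiated auth parameters, mirroring
# the jq expressions used in the trace; DIGEST/DHGROUP hold the values under test.
HOSTSOCK=/var/tmp/host.sock
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
DIGEST=sha512
DHGROUP=ffdhe6144

name=$($RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# Target-side view of the admin qpair's completed authentication.
qpairs=$($RPC nvmf_subsystem_get_qpairs $SUBNQN)
[[ $(echo "$qpairs" | jq -r '.[0].auth.digest')  == "$DIGEST" ]]
[[ $(echo "$qpairs" | jq -r '.[0].auth.dhgroup') == "$DHGROUP" ]]
[[ $(echo "$qpairs" | jq -r '.[0].auth.state')   == completed ]]

# Tear down before the next key/dhgroup combination.
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0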
00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.479 14:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.739 00:23:21.739 14:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:21.739 14:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.739 14:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:22.000 14:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.000 14:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.000 14:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:22.000 14:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.000 14:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:22.000 14:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:22.000 { 00:23:22.000 "cntlid": 129, 00:23:22.000 "qid": 0, 00:23:22.000 "state": "enabled", 00:23:22.000 "listen_address": { 00:23:22.000 "trtype": "TCP", 00:23:22.000 "adrfam": "IPv4", 00:23:22.000 "traddr": "10.0.0.2", 00:23:22.000 "trsvcid": "4420" 00:23:22.000 }, 00:23:22.000 "peer_address": { 00:23:22.000 "trtype": "TCP", 00:23:22.000 "adrfam": "IPv4", 00:23:22.000 "traddr": "10.0.0.1", 00:23:22.000 "trsvcid": "59352" 00:23:22.000 }, 00:23:22.000 "auth": { 
00:23:22.000 "state": "completed", 00:23:22.000 "digest": "sha512", 00:23:22.000 "dhgroup": "ffdhe6144" 00:23:22.000 } 00:23:22.000 } 00:23:22.000 ]' 00:23:22.000 14:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:22.000 14:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:22.000 14:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:22.000 14:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:22.000 14:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:22.000 14:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.000 14:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.000 14:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.261 14:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:23.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.205 14:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.466 00:23:23.466 14:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:23.466 14:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:23.466 14:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.726 14:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.726 14:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:23.726 14:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:23.726 14:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.726 14:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:23.726 14:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:23.726 { 00:23:23.726 "cntlid": 131, 00:23:23.726 "qid": 0, 00:23:23.726 "state": "enabled", 00:23:23.726 "listen_address": { 00:23:23.726 "trtype": "TCP", 00:23:23.726 "adrfam": "IPv4", 00:23:23.726 "traddr": "10.0.0.2", 00:23:23.726 "trsvcid": "4420" 00:23:23.726 }, 00:23:23.726 "peer_address": { 00:23:23.726 "trtype": "TCP", 00:23:23.726 "adrfam": "IPv4", 00:23:23.726 "traddr": "10.0.0.1", 00:23:23.726 "trsvcid": "59386" 00:23:23.726 }, 00:23:23.726 "auth": { 00:23:23.726 "state": "completed", 00:23:23.726 "digest": "sha512", 00:23:23.726 "dhgroup": "ffdhe6144" 00:23:23.726 } 00:23:23.726 } 00:23:23.726 ]' 00:23:23.726 14:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:23.726 14:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:23.726 14:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:23.727 14:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:23.727 14:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:23.727 14:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:23.727 14:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:23.727 14:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:23.986 14:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:23:24.556 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.556 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:24.556 14:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.556 14:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.556 14:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.556 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:24.556 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:24.556 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:24.817 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:23:24.817 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:24.817 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:24.817 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:24.817 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:24.817 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:24.817 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.817 14:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:24.817 14:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.817 14:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:24.817 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.817 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:23:25.076 00:23:25.076 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:25.076 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:25.076 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.336 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.336 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.336 14:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:25.336 14:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.336 14:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:25.336 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:25.336 { 00:23:25.336 "cntlid": 133, 00:23:25.336 "qid": 0, 00:23:25.336 "state": "enabled", 00:23:25.336 "listen_address": { 00:23:25.336 "trtype": "TCP", 00:23:25.336 "adrfam": "IPv4", 00:23:25.336 "traddr": "10.0.0.2", 00:23:25.336 "trsvcid": "4420" 00:23:25.336 }, 00:23:25.336 "peer_address": { 00:23:25.336 "trtype": "TCP", 00:23:25.336 "adrfam": "IPv4", 00:23:25.336 "traddr": "10.0.0.1", 00:23:25.336 "trsvcid": "59412" 00:23:25.336 }, 00:23:25.336 "auth": { 00:23:25.336 "state": "completed", 00:23:25.336 "digest": "sha512", 00:23:25.336 "dhgroup": "ffdhe6144" 00:23:25.336 } 00:23:25.336 } 00:23:25.336 ]' 00:23:25.336 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:25.336 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:25.336 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:25.336 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:25.596 14:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:25.596 14:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:25.596 14:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.596 14:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:25.596 14:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:23:26.535 14:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:26.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:26.535 14:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:26.535 14:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.535 14:25:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.535 14:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.535 14:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:26.535 14:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:26.535 14:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:26.535 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:23:26.535 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:26.535 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:26.535 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:26.535 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:26.535 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:26.535 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:23:26.535 14:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:26.535 14:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.535 14:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:26.535 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:26.535 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:26.795 00:23:26.795 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:26.795 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:26.795 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.054 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.054 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:27.054 14:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:27.054 14:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.054 14:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:27.054 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:27.054 { 00:23:27.054 "cntlid": 135, 00:23:27.054 "qid": 0, 00:23:27.054 "state": "enabled", 00:23:27.054 "listen_address": { 
00:23:27.054 "trtype": "TCP", 00:23:27.054 "adrfam": "IPv4", 00:23:27.054 "traddr": "10.0.0.2", 00:23:27.054 "trsvcid": "4420" 00:23:27.054 }, 00:23:27.054 "peer_address": { 00:23:27.054 "trtype": "TCP", 00:23:27.054 "adrfam": "IPv4", 00:23:27.054 "traddr": "10.0.0.1", 00:23:27.054 "trsvcid": "59438" 00:23:27.054 }, 00:23:27.054 "auth": { 00:23:27.054 "state": "completed", 00:23:27.054 "digest": "sha512", 00:23:27.054 "dhgroup": "ffdhe6144" 00:23:27.054 } 00:23:27.054 } 00:23:27.054 ]' 00:23:27.054 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:27.054 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:27.055 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:27.055 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:27.055 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:27.055 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:27.055 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:27.055 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.315 14:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:23:27.885 14:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:28.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.145 14:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.715 00:23:28.715 14:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:28.715 14:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.715 14:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:28.976 14:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.976 14:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:28.976 14:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:28.976 14:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.976 14:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:28.976 14:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:28.976 { 00:23:28.976 "cntlid": 137, 00:23:28.976 "qid": 0, 00:23:28.976 "state": "enabled", 00:23:28.976 "listen_address": { 00:23:28.976 "trtype": "TCP", 00:23:28.976 "adrfam": "IPv4", 00:23:28.976 "traddr": "10.0.0.2", 00:23:28.976 "trsvcid": "4420" 00:23:28.976 }, 00:23:28.976 "peer_address": { 00:23:28.976 "trtype": "TCP", 00:23:28.976 "adrfam": "IPv4", 00:23:28.976 "traddr": "10.0.0.1", 00:23:28.976 "trsvcid": "54428" 00:23:28.976 }, 00:23:28.976 "auth": { 00:23:28.976 "state": "completed", 00:23:28.976 "digest": "sha512", 00:23:28.976 "dhgroup": "ffdhe8192" 00:23:28.976 } 00:23:28.976 } 00:23:28.976 ]' 00:23:28.976 14:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:28.976 14:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:28.976 14:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:28.976 14:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:28.976 14:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:28.976 14:25:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.976 14:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.976 14:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:29.237 14:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:23:29.806 14:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:29.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.806 14:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:29.806 14:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:29.806 14:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.806 14:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:29.806 14:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:29.806 14:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:29.806 14:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:30.066 14:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:23:30.066 14:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:30.066 14:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:30.066 14:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:30.066 14:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:30.066 14:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:30.066 14:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.066 14:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:30.066 14:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.066 14:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:30.066 14:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.066 14:25:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.636 00:23:30.636 14:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:30.636 14:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:30.636 14:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:30.636 14:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.636 14:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:30.636 14:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:30.636 14:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.636 14:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:30.636 14:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:30.636 { 00:23:30.636 "cntlid": 139, 00:23:30.636 "qid": 0, 00:23:30.636 "state": "enabled", 00:23:30.636 "listen_address": { 00:23:30.636 "trtype": "TCP", 00:23:30.636 "adrfam": "IPv4", 00:23:30.636 "traddr": "10.0.0.2", 00:23:30.636 "trsvcid": "4420" 00:23:30.636 }, 00:23:30.636 "peer_address": { 00:23:30.636 "trtype": "TCP", 00:23:30.636 "adrfam": "IPv4", 00:23:30.636 "traddr": "10.0.0.1", 00:23:30.636 "trsvcid": "54452" 00:23:30.636 }, 00:23:30.636 "auth": { 00:23:30.636 "state": "completed", 00:23:30.636 "digest": "sha512", 00:23:30.636 "dhgroup": "ffdhe8192" 00:23:30.636 } 00:23:30.636 } 00:23:30.636 ]' 00:23:30.896 14:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:30.896 14:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:30.896 14:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:30.896 14:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:30.896 14:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:30.896 14:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.896 14:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.896 14:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:31.157 14:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:01:MDkzOTQxZjBkZDFhYjBhODQ1MzVlMTBmY2I3ZTU3OGWTJdlp: --dhchap-ctrl-secret DHHC-1:02:OWM5ZjZlNjE5YjkzMzI5ZmY3ZmY1NjNhOWU4YmU0ZWE3OTkzZThhYjYwMDI2MjdlhKe41w==: 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:31.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:31.727 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.728 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.297 00:23:32.297 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:32.297 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.297 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:32.557 14:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.557 14:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.557 14:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:23:32.557 14:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.557 14:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:32.557 14:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:32.557 { 00:23:32.557 "cntlid": 141, 00:23:32.557 "qid": 0, 00:23:32.557 "state": "enabled", 00:23:32.557 "listen_address": { 00:23:32.557 "trtype": "TCP", 00:23:32.557 "adrfam": "IPv4", 00:23:32.557 "traddr": "10.0.0.2", 00:23:32.557 "trsvcid": "4420" 00:23:32.557 }, 00:23:32.557 "peer_address": { 00:23:32.557 "trtype": "TCP", 00:23:32.557 "adrfam": "IPv4", 00:23:32.557 "traddr": "10.0.0.1", 00:23:32.557 "trsvcid": "54486" 00:23:32.557 }, 00:23:32.557 "auth": { 00:23:32.557 "state": "completed", 00:23:32.557 "digest": "sha512", 00:23:32.557 "dhgroup": "ffdhe8192" 00:23:32.557 } 00:23:32.557 } 00:23:32.557 ]' 00:23:32.557 14:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:32.557 14:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:32.557 14:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:32.557 14:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:32.557 14:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:32.557 14:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.557 14:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.557 14:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.816 14:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:02:ZjE4ZmJhNmJlYTlhMDQyNTA4YjdmYzVhYmM4OGQwNjdlNzVmYTU5NzY4NmQzYjc4nLttMA==: --dhchap-ctrl-secret DHHC-1:01:Mjg0ZGE3NDJjYTBhYzc5ZjM3ZDk1YjI2YTM5M2U1MzexdOV+: 00:23:33.383 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:33.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:33.383 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:33.383 14:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.383 14:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:33.643 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:34.213 00:23:34.213 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:34.213 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.213 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:34.474 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.474 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:34.474 14:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:34.474 14:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.474 14:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:34.474 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:34.474 { 00:23:34.474 "cntlid": 143, 00:23:34.474 "qid": 0, 00:23:34.474 "state": "enabled", 00:23:34.474 "listen_address": { 00:23:34.474 "trtype": "TCP", 00:23:34.474 "adrfam": "IPv4", 00:23:34.474 "traddr": "10.0.0.2", 00:23:34.474 "trsvcid": "4420" 00:23:34.474 }, 00:23:34.474 "peer_address": { 00:23:34.474 "trtype": "TCP", 00:23:34.474 "adrfam": "IPv4", 00:23:34.474 "traddr": "10.0.0.1", 00:23:34.474 "trsvcid": "54524" 00:23:34.474 }, 00:23:34.474 "auth": { 00:23:34.474 "state": "completed", 00:23:34.474 "digest": "sha512", 00:23:34.474 "dhgroup": "ffdhe8192" 00:23:34.474 } 00:23:34.474 } 00:23:34.474 ]' 00:23:34.474 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:34.474 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:34.474 14:25:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:34.474 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:34.474 14:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:34.474 14:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.474 14:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.474 14:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:34.735 14:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:23:35.376 14:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:35.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:35.376 14:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:35.376 14:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.376 14:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.376 14:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.376 14:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:35.376 14:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:23:35.376 14:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:35.376 14:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:35.376 14:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:35.376 14:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:35.636 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:23:35.636 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:35.636 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:35.636 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:35.636 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:35.636 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:35.636 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 
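Each connect_authenticate pass traced above follows the same shape. The lines below are a condensed sketch of one such pass, not additional script output: commands, addresses, sockets, and NQNs are copied from this log, the secrets are abbreviated, and host-side RPCs go to the separate /var/tmp/host.sock instance while target-side RPCs use the default socket, as the harness does.

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # host side: restrict the initiator to the digest/dhgroup pair under test
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # target side: authorize the host NQN with the key pair for this pass
    $RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # host side: attach, verify the negotiated auth state on the qpair, then detach
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $HOSTNQN -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # kernel initiator check with the same secrets (abbreviated here), then clean up
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q $HOSTNQN \
        --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb \
        --dhchap-secret DHHC-1:00:... --dhchap-ctrl-secret DHHC-1:03:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    $RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 $HOSTNQN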
00:23:35.636 14:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:35.636 14:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.636 14:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:35.636 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.636 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.206 00:23:36.206 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:36.206 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:36.206 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.206 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.206 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.206 14:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:36.206 14:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.206 14:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:36.206 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:36.206 { 00:23:36.206 "cntlid": 145, 00:23:36.206 "qid": 0, 00:23:36.206 "state": "enabled", 00:23:36.206 "listen_address": { 00:23:36.206 "trtype": "TCP", 00:23:36.206 "adrfam": "IPv4", 00:23:36.206 "traddr": "10.0.0.2", 00:23:36.206 "trsvcid": "4420" 00:23:36.206 }, 00:23:36.206 "peer_address": { 00:23:36.206 "trtype": "TCP", 00:23:36.206 "adrfam": "IPv4", 00:23:36.206 "traddr": "10.0.0.1", 00:23:36.206 "trsvcid": "54560" 00:23:36.206 }, 00:23:36.206 "auth": { 00:23:36.206 "state": "completed", 00:23:36.206 "digest": "sha512", 00:23:36.206 "dhgroup": "ffdhe8192" 00:23:36.206 } 00:23:36.206 } 00:23:36.206 ]' 00:23:36.206 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:36.206 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:36.206 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:36.206 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:36.206 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:36.466 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.466 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.466 14:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.466 
14:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:00:MTk5ZGEwZDUwZGY4OGQ2ZjdmMWIyMjk3Y2RjNTIwYjkwYmYwZGUyZjhjZDI1NTVkE+lxEQ==: --dhchap-ctrl-secret DHHC-1:03:NGEwNWEwODZjMDJiYzgzNWIxNmI3MmQxZTZjMGFkOTE0ZTc4NzVmYmFjODIyZTM2MWJlODA4NTZjMjg4M2Q0Zhe9U/0=: 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:37.036 14:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:37.606 request: 00:23:37.606 { 00:23:37.606 "name": "nvme0", 00:23:37.606 "trtype": "tcp", 00:23:37.606 "traddr": 
"10.0.0.2", 00:23:37.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:23:37.606 "adrfam": "ipv4", 00:23:37.606 "trsvcid": "4420", 00:23:37.606 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:37.606 "dhchap_key": "key2", 00:23:37.606 "method": "bdev_nvme_attach_controller", 00:23:37.606 "req_id": 1 00:23:37.606 } 00:23:37.606 Got JSON-RPC error response 00:23:37.606 response: 00:23:37.606 { 00:23:37.606 "code": -5, 00:23:37.606 "message": "Input/output error" 00:23:37.606 } 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:37.606 14:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:38.175 request: 00:23:38.175 { 00:23:38.175 "name": "nvme0", 00:23:38.175 "trtype": "tcp", 00:23:38.175 "traddr": "10.0.0.2", 00:23:38.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:23:38.175 "adrfam": "ipv4", 00:23:38.175 "trsvcid": "4420", 00:23:38.176 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:38.176 "dhchap_key": "key1", 00:23:38.176 "dhchap_ctrlr_key": "ckey2", 00:23:38.176 "method": "bdev_nvme_attach_controller", 00:23:38.176 "req_id": 1 00:23:38.176 } 00:23:38.176 Got JSON-RPC error response 00:23:38.176 response: 00:23:38.176 { 00:23:38.176 "code": -5, 00:23:38.176 "message": "Input/output error" 00:23:38.176 } 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.176 14:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.745 request: 00:23:38.745 { 00:23:38.745 "name": "nvme0", 00:23:38.745 "trtype": "tcp", 00:23:38.745 "traddr": "10.0.0.2", 00:23:38.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:23:38.745 "adrfam": "ipv4", 00:23:38.745 "trsvcid": "4420", 00:23:38.745 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:38.745 "dhchap_key": "key1", 00:23:38.745 "dhchap_ctrlr_key": "ckey1", 00:23:38.745 "method": "bdev_nvme_attach_controller", 00:23:38.745 "req_id": 1 00:23:38.745 } 00:23:38.745 Got JSON-RPC error response 00:23:38.745 response: 00:23:38.745 { 00:23:38.745 "code": -5, 00:23:38.745 "message": "Input/output error" 00:23:38.745 } 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 555335 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 555335 ']' 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 555335 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 555335 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 555335' 00:23:38.745 killing process with pid 555335 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 555335 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 555335 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:38.745 14:26:02 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=581255 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 581255 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 581255 ']' 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:38.745 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.746 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:38.746 14:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.684 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:39.684 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:23:39.684 14:26:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.684 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:39.684 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.684 14:26:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.684 14:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:39.684 14:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 581255 00:23:39.684 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 581255 ']' 00:23:39.684 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.684 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:39.684 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
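The remaining entries run against a freshly started target with DH-CHAP authentication debug logging enabled. A condensed sketch of that restart follows; the binary path, flags, and network namespace name are taken from this log, while the wait loop is a simplified stand-in for the harness's waitforlisten helper rather than the helper itself.

    # restart the target app with nvmf_auth debug logging, RPC-gated startup
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # block until the default RPC socket appears before sending configuration RPCs
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done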
00:23:39.684 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:39.684 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:39.943 14:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:40.513 00:23:40.513 14:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:40.513 14:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:40.513 14:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:40.513 14:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.513 14:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:40.513 14:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.513 14:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.513 14:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.513 14:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:40.513 { 00:23:40.513 
"cntlid": 1, 00:23:40.513 "qid": 0, 00:23:40.513 "state": "enabled", 00:23:40.513 "listen_address": { 00:23:40.513 "trtype": "TCP", 00:23:40.513 "adrfam": "IPv4", 00:23:40.513 "traddr": "10.0.0.2", 00:23:40.513 "trsvcid": "4420" 00:23:40.513 }, 00:23:40.513 "peer_address": { 00:23:40.513 "trtype": "TCP", 00:23:40.513 "adrfam": "IPv4", 00:23:40.513 "traddr": "10.0.0.1", 00:23:40.513 "trsvcid": "58544" 00:23:40.513 }, 00:23:40.513 "auth": { 00:23:40.513 "state": "completed", 00:23:40.513 "digest": "sha512", 00:23:40.513 "dhgroup": "ffdhe8192" 00:23:40.513 } 00:23:40.513 } 00:23:40.513 ]' 00:23:40.513 14:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:40.775 14:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:40.775 14:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:40.775 14:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:40.775 14:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:40.775 14:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:40.775 14:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:40.775 14:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:41.035 14:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-secret DHHC-1:03:ZDM4ZjljNmY5MDVhMzhhMGI5OWY5YjUzOTk1M2I4NzliYzFlMzFjYWQ1YTRhYzNmNGI0MDkzOGI2YmM5MTY2MVncYfg=: 00:23:41.605 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:41.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:41.605 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:41.605 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.605 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.605 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.605 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:23:41.606 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.606 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.606 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.606 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:41.606 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:41.866 request: 00:23:41.866 { 00:23:41.866 "name": "nvme0", 00:23:41.866 "trtype": "tcp", 00:23:41.866 "traddr": "10.0.0.2", 00:23:41.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:23:41.866 "adrfam": "ipv4", 00:23:41.866 "trsvcid": "4420", 00:23:41.866 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:41.866 "dhchap_key": "key3", 00:23:41.866 "method": "bdev_nvme_attach_controller", 00:23:41.866 "req_id": 1 00:23:41.866 } 00:23:41.866 Got JSON-RPC error response 00:23:41.866 response: 00:23:41.866 { 00:23:41.866 "code": -5, 00:23:41.866 "message": "Input/output error" 00:23:41.866 } 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:41.866 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:42.126 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:42.126 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:23:42.126 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:42.126 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:23:42.126 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:42.126 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:23:42.126 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:42.126 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:42.126 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:42.386 request: 00:23:42.386 { 00:23:42.386 "name": "nvme0", 00:23:42.386 "trtype": "tcp", 00:23:42.386 "traddr": "10.0.0.2", 00:23:42.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:23:42.386 "adrfam": "ipv4", 00:23:42.386 "trsvcid": "4420", 00:23:42.386 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:42.386 "dhchap_key": "key3", 00:23:42.386 "method": "bdev_nvme_attach_controller", 00:23:42.386 "req_id": 1 00:23:42.386 } 00:23:42.386 Got JSON-RPC error response 00:23:42.386 response: 00:23:42.386 { 00:23:42.386 "code": -5, 00:23:42.386 "message": "Input/output error" 00:23:42.386 } 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.386 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:42.387 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:23:42.387 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:42.387 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:23:42.387 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:42.387 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:23:42.387 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:42.387 14:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:42.387 14:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:42.646 request: 00:23:42.646 { 00:23:42.646 "name": "nvme0", 00:23:42.646 "trtype": "tcp", 00:23:42.646 "traddr": "10.0.0.2", 00:23:42.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:23:42.646 "adrfam": "ipv4", 00:23:42.646 "trsvcid": "4420", 00:23:42.646 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:42.646 "dhchap_key": "key0", 00:23:42.646 "dhchap_ctrlr_key": "key1", 00:23:42.646 "method": "bdev_nvme_attach_controller", 00:23:42.646 "req_id": 1 00:23:42.646 } 00:23:42.646 Got JSON-RPC error response 00:23:42.646 response: 00:23:42.646 { 00:23:42.646 "code": -5, 00:23:42.646 "message": "Input/output error" 00:23:42.646 } 00:23:42.646 14:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:23:42.646 14:26:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:42.646 14:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:42.646 14:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:42.646 14:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:42.646 14:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:42.906 00:23:42.906 14:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:23:42.906 14:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:23:42.906 14:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:42.906 14:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.906 14:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:42.906 14:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.166 14:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:23:43.166 14:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:23:43.166 14:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 555367 00:23:43.166 14:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 555367 ']' 00:23:43.166 14:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 555367 00:23:43.166 14:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:23:43.166 14:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:43.166 14:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 555367 00:23:43.166 14:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:23:43.166 14:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:23:43.166 14:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 555367' 00:23:43.166 killing process with pid 555367 00:23:43.166 14:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 555367 00:23:43.166 14:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 555367 00:23:43.426 14:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:43.426 14:26:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:43.426 14:26:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:23:43.426 14:26:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:43.426 14:26:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:23:43.426 
14:26:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:43.426 14:26:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:43.426 rmmod nvme_tcp 00:23:43.426 rmmod nvme_fabrics 00:23:43.426 rmmod nvme_keyring 00:23:43.426 14:26:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:43.426 14:26:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:23:43.426 14:26:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:23:43.426 14:26:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 581255 ']' 00:23:43.426 14:26:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 581255 00:23:43.426 14:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 581255 ']' 00:23:43.426 14:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 581255 00:23:43.426 14:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:23:43.426 14:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:43.426 14:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 581255 00:23:43.426 14:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:23:43.426 14:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:23:43.426 14:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 581255' 00:23:43.426 killing process with pid 581255 00:23:43.426 14:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 581255 00:23:43.426 14:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 581255 00:23:43.686 14:26:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:43.686 14:26:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:43.686 14:26:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:43.686 14:26:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:43.686 14:26:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:43.686 14:26:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.686 14:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.686 14:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.229 14:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:46.229 14:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.wWP /tmp/spdk.key-sha256.j4W /tmp/spdk.key-sha384.FzM /tmp/spdk.key-sha512.4P0 /tmp/spdk.key-sha512.9ZY /tmp/spdk.key-sha384.8rs /tmp/spdk.key-sha256.K7l '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:46.229 00:23:46.229 real 2m20.708s 00:23:46.229 user 5m11.887s 00:23:46.229 sys 0m19.730s 00:23:46.229 14:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:46.229 14:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.229 ************************************ 00:23:46.229 END TEST nvmf_auth_target 
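Note: the nvmf_auth_target run above drives DH-HMAC-CHAP entirely over the two JSON-RPC sockets visible in the trace — target-side calls go through rpc_cmd, host-side calls through rpc.py -s /var/tmp/host.sock. A minimal sketch of that pairing, written as direct rpc.py calls and using only RPCs that appear in the log; the address, NQNs and key names are copied from the run above, and invoking them outside the test harness is illustrative only:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
# target side: allow the host NQN on the subsystem and bind DH-HMAC-CHAP key3 to it
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3
# host side: restrict the digests/dhgroups the initiator is willing to negotiate
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
# host side: attach the controller with the matching key; a digest/dhgroup or key
# mismatch surfaces as the "Input/output error" JSON-RPC responses captured above
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3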
00:23:46.229 ************************************ 00:23:46.229 14:26:09 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:23:46.229 14:26:09 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:46.229 14:26:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:23:46.229 14:26:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:46.229 14:26:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:46.229 ************************************ 00:23:46.229 START TEST nvmf_bdevio_no_huge 00:23:46.229 ************************************ 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:46.229 * Looking for test storage... 00:23:46.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.229 
14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:23:46.229 14:26:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:54.374 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:54.375 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:54.375 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.375 14:26:17 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:54.375 Found net devices under 0000:31:00.0: cvl_0_0 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:54.375 Found net devices under 0000:31:00.1: cvl_0_1 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.375 
14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:54.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:23:54.375 00:23:54.375 --- 10.0.0.2 ping statistics --- 00:23:54.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.375 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:54.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:23:54.375 00:23:54.375 --- 10.0.0.1 ping statistics --- 00:23:54.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.375 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=586670 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 586670 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 586670 ']' 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 
-- # local max_retries=100 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:54.375 14:26:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:54.375 [2024-06-07 14:26:17.497987] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:23:54.375 [2024-06-07 14:26:17.498050] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:54.375 [2024-06-07 14:26:17.591643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:54.375 [2024-06-07 14:26:17.659325] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.375 [2024-06-07 14:26:17.659364] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.375 [2024-06-07 14:26:17.659372] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.375 [2024-06-07 14:26:17.659378] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.375 [2024-06-07 14:26:17.659384] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:54.375 [2024-06-07 14:26:17.659915] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:23:54.376 [2024-06-07 14:26:17.660037] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:23:54.376 [2024-06-07 14:26:17.660174] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:54.376 [2024-06-07 14:26:17.660175] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:23:54.636 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:54.636 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:23:54.636 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:54.636 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:54.636 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:54.898 [2024-06-07 14:26:18.325269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.898 14:26:18 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:54.898 Malloc0 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:54.898 [2024-06-07 14:26:18.362776] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:54.898 { 00:23:54.898 "params": { 00:23:54.898 "name": "Nvme$subsystem", 00:23:54.898 "trtype": "$TEST_TRANSPORT", 00:23:54.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.898 "adrfam": "ipv4", 00:23:54.898 "trsvcid": "$NVMF_PORT", 00:23:54.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.898 "hdgst": ${hdgst:-false}, 00:23:54.898 "ddgst": ${ddgst:-false} 00:23:54.898 }, 00:23:54.898 "method": "bdev_nvme_attach_controller" 00:23:54.898 } 00:23:54.898 EOF 00:23:54.898 )") 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
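Note: gen_nvmf_target_json above assembles the bdevio configuration in shell — each subsystem appends one command-substituted heredoc to the config array, the template fields ($NVMF_FIRST_TARGET_IP, $NVMF_PORT, ...) resolve to this run's values, and the result, printed next in the trace, is what bdevio reads from /dev/fd/62. A stand-alone sketch of the same heredoc-into-array pattern, reduced to the single controller used here; the hard-coded values mirror the printf output below, and the framing is illustrative rather than the harness code:

config=()
config+=("$(cat <<EOF
{
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1" }
}
EOF
)")
# pretty-print/validate the assembled entries; the harness hands the equivalent
# stream to bdevio via --json /dev/fd/62
printf '%s\n' "${config[@]}" | jq .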
00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:23:54.898 14:26:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:54.898 "params": { 00:23:54.898 "name": "Nvme1", 00:23:54.898 "trtype": "tcp", 00:23:54.898 "traddr": "10.0.0.2", 00:23:54.898 "adrfam": "ipv4", 00:23:54.898 "trsvcid": "4420", 00:23:54.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:54.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:54.898 "hdgst": false, 00:23:54.898 "ddgst": false 00:23:54.898 }, 00:23:54.898 "method": "bdev_nvme_attach_controller" 00:23:54.898 }' 00:23:54.898 [2024-06-07 14:26:18.416153] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:23:54.899 [2024-06-07 14:26:18.416227] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid587012 ] 00:23:54.899 [2024-06-07 14:26:18.488517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:55.159 [2024-06-07 14:26:18.560845] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.159 [2024-06-07 14:26:18.560965] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.159 [2024-06-07 14:26:18.560967] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.159 I/O targets: 00:23:55.159 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:55.159 00:23:55.159 00:23:55.159 CUnit - A unit testing framework for C - Version 2.1-3 00:23:55.159 http://cunit.sourceforge.net/ 00:23:55.159 00:23:55.159 00:23:55.159 Suite: bdevio tests on: Nvme1n1 00:23:55.159 Test: blockdev write read block ...passed 00:23:55.159 Test: blockdev write zeroes read block ...passed 00:23:55.159 Test: blockdev write zeroes read no split ...passed 00:23:55.420 Test: blockdev write zeroes read split ...passed 00:23:55.420 Test: blockdev write zeroes read split partial ...passed 00:23:55.420 Test: blockdev reset ...[2024-06-07 14:26:18.874479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:55.420 [2024-06-07 14:26:18.874534] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183ced0 (9): Bad file descriptor 00:23:55.420 [2024-06-07 14:26:18.887619] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:55.420 passed 00:23:55.420 Test: blockdev write read 8 blocks ...passed 00:23:55.420 Test: blockdev write read size > 128k ...passed 00:23:55.420 Test: blockdev write read invalid size ...passed 00:23:55.420 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:55.420 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:55.420 Test: blockdev write read max offset ...passed 00:23:55.420 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:55.420 Test: blockdev writev readv 8 blocks ...passed 00:23:55.420 Test: blockdev writev readv 30 x 1block ...passed 00:23:55.681 Test: blockdev writev readv block ...passed 00:23:55.681 Test: blockdev writev readv size > 128k ...passed 00:23:55.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:55.681 Test: blockdev comparev and writev ...[2024-06-07 14:26:19.151953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:55.681 [2024-06-07 14:26:19.151977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.681 [2024-06-07 14:26:19.151988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:55.681 [2024-06-07 14:26:19.151994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:55.681 [2024-06-07 14:26:19.152476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:55.681 [2024-06-07 14:26:19.152485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:55.681 [2024-06-07 14:26:19.152494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:55.681 [2024-06-07 14:26:19.152499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:55.681 [2024-06-07 14:26:19.152963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:55.682 [2024-06-07 14:26:19.152971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:55.682 [2024-06-07 14:26:19.152980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:55.682 [2024-06-07 14:26:19.152985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:55.682 [2024-06-07 14:26:19.153480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:55.682 [2024-06-07 14:26:19.153487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:55.682 [2024-06-07 14:26:19.153497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:55.682 [2024-06-07 14:26:19.153502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:55.682 passed 00:23:55.682 Test: blockdev nvme passthru rw ...passed 00:23:55.682 Test: blockdev nvme passthru vendor specific ...[2024-06-07 14:26:19.239108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:55.682 [2024-06-07 14:26:19.239121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:55.682 [2024-06-07 14:26:19.239498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:55.682 [2024-06-07 14:26:19.239505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:55.682 [2024-06-07 14:26:19.239855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:55.682 [2024-06-07 14:26:19.239862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:55.682 [2024-06-07 14:26:19.240207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:55.682 [2024-06-07 14:26:19.240215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:55.682 passed 00:23:55.682 Test: blockdev nvme admin passthru ...passed 00:23:55.682 Test: blockdev copy ...passed 00:23:55.682 00:23:55.682 Run Summary: Type Total Ran Passed Failed Inactive 00:23:55.682 suites 1 1 n/a 0 0 00:23:55.682 tests 23 23 23 0 0 00:23:55.682 asserts 152 152 152 0 n/a 00:23:55.682 00:23:55.682 Elapsed time = 1.212 seconds 00:23:55.943 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:55.943 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:55.943 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:55.943 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:55.943 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:55.943 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:55.943 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:55.943 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:23:55.943 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:55.943 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:23:55.943 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:55.943 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:55.943 rmmod nvme_tcp 00:23:55.943 rmmod nvme_fabrics 00:23:55.943 rmmod nvme_keyring 00:23:56.205 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:56.205 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:23:56.205 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:23:56.205 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 586670 ']' 00:23:56.205 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 586670 00:23:56.205 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 586670 ']' 00:23:56.205 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 586670 00:23:56.205 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:23:56.205 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:23:56.205 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 586670 00:23:56.205 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:23:56.205 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:23:56.205 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # echo 'killing process with pid 586670' 00:23:56.205 killing process with pid 586670 00:23:56.205 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 586670 00:23:56.205 14:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 586670 00:23:56.466 14:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:56.466 14:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:56.466 14:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:56.466 14:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:56.466 14:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:56.466 14:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.466 14:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.466 14:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.013 14:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:59.013 00:23:59.013 real 0m12.750s 00:23:59.013 user 0m13.218s 00:23:59.013 sys 0m6.880s 00:23:59.013 14:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:23:59.013 14:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:59.013 ************************************ 00:23:59.013 END TEST nvmf_bdevio_no_huge 00:23:59.013 ************************************ 00:23:59.013 14:26:22 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:59.013 14:26:22 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:23:59.013 14:26:22 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:23:59.013 14:26:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:59.013 ************************************ 00:23:59.013 START TEST nvmf_tls 00:23:59.013 ************************************ 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:59.013 * Looking for test storage... 
00:23:59.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.013 14:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.252 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.252 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:24:07.252 
14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:07.252 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:07.252 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:07.252 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:07.252 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:07.252 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:24:07.252 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:07.252 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:24:07.252 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:24:07.252 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:24:07.252 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:07.253 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:07.253 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:07.253 Found net devices under 0000:31:00.0: cvl_0_0 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:07.253 Found net devices under 0000:31:00.1: cvl_0_1 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:07.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:24:07.253 00:24:07.253 --- 10.0.0.2 ping statistics --- 00:24:07.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.253 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:07.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:24:07.253 00:24:07.253 --- 10.0.0.1 ping statistics --- 00:24:07.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.253 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=591851 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 591851 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 591851 ']' 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:07.253 14:26:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.253 [2024-06-07 14:26:30.478978] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:24:07.253 [2024-06-07 14:26:30.479044] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.253 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.253 [2024-06-07 14:26:30.576204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.253 [2024-06-07 14:26:30.623090] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.253 [2024-06-07 14:26:30.623145] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:07.254 [2024-06-07 14:26:30.623154] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.254 [2024-06-07 14:26:30.623160] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.254 [2024-06-07 14:26:30.623166] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:07.254 [2024-06-07 14:26:30.623230] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.825 14:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:07.825 14:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:24:07.825 14:26:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:07.825 14:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:07.825 14:26:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.825 14:26:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.825 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:24:07.825 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:07.825 true 00:24:07.825 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:24:07.825 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:08.087 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:24:08.087 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:24:08.087 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:08.348 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:24:08.348 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:08.348 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:24:08.348 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:24:08.348 14:26:31 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:08.610 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:08.610 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:24:08.872 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:24:08.872 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:24:08.872 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:08.872 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:24:08.872 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:24:08.872 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:24:08.872 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:09.132 14:26:32 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:09.132 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:24:09.393 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:24:09.393 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:24:09.393 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:09.393 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:09.393 14:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.LH1mVywmlF 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.r6A8ABPUOD 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.LH1mVywmlF 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.r6A8ABPUOD 00:24:09.654 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:24:09.915 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:10.176 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.LH1mVywmlF 00:24:10.176 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LH1mVywmlF 00:24:10.176 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:10.176 [2024-06-07 14:26:33.706734] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.176 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:10.436 14:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:10.436 [2024-06-07 14:26:33.999468] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:10.436 [2024-06-07 14:26:33.999666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.436 14:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:10.698 malloc0 00:24:10.698 14:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:10.698 14:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LH1mVywmlF 00:24:10.958 [2024-06-07 14:26:34.418373] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:10.958 14:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.LH1mVywmlF 00:24:10.958 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.964 Initializing NVMe Controllers 00:24:20.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:20.964 Initialization complete. Launching workers. 
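For reference, the setup_nvmf_tgt steps traced above reduce to the RPC sequence below. Keys, NQNs and addresses are copied from the log; rpc_py abbreviates the full scripts/rpc.py path, the redirection of the echoed key into the mktemp'd file is assumed, and the comment on the key layout is an inference from the input/output pair above rather than a verified specification.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    key_path=/tmp/tmp.LH1mVywmlF
    # the interchange key appears to be "NVMeTLSkey-1:01:" + base64(key bytes + CRC32) + ":"
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    $rpc_py sock_impl_set_options -i ssl --tls-version 13
    $rpc_py framework_start_init
    $rpc_py nvmf_create_transport -t tcp -o
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc_py bdev_malloc_create 32 4096 -b malloc0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"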
00:24:20.964 ======================================================== 00:24:20.964 Latency(us) 00:24:20.964 Device Information : IOPS MiB/s Average min max 00:24:20.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19112.96 74.66 3348.53 1060.51 3999.62 00:24:20.964 ======================================================== 00:24:20.964 Total : 19112.96 74.66 3348.53 1060.51 3999.62 00:24:20.964 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LH1mVywmlF 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LH1mVywmlF' 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=594616 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 594616 /var/tmp/bdevperf.sock 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 594616 ']' 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:20.964 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.964 [2024-06-07 14:26:44.564303] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:24:20.964 [2024-06-07 14:26:44.564361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid594616 ] 00:24:20.964 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.225 [2024-06-07 14:26:44.617925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.225 [2024-06-07 14:26:44.645999] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.225 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:21.225 14:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:24:21.225 14:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LH1mVywmlF 00:24:21.225 [2024-06-07 14:26:44.847670] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:21.225 [2024-06-07 14:26:44.847724] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:21.486 TLSTESTn1 00:24:21.486 14:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:21.486 Running I/O for 10 seconds... 00:24:31.489 00:24:31.489 Latency(us) 00:24:31.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.489 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:31.489 Verification LBA range: start 0x0 length 0x2000 00:24:31.489 TLSTESTn1 : 10.01 5835.93 22.80 0.00 0.00 21900.79 4587.52 29054.29 00:24:31.489 =================================================================================================================== 00:24:31.489 Total : 5835.93 22.80 0.00 0.00 21900.79 4587.52 29054.29 00:24:31.489 0 00:24:31.489 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:31.489 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 594616 00:24:31.489 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 594616 ']' 00:24:31.489 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 594616 00:24:31.489 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:24:31.489 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:31.489 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 594616 00:24:31.489 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:24:31.489 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:24:31.489 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 594616' 00:24:31.489 killing process with pid 594616 00:24:31.489 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 594616 00:24:31.489 Received shutdown signal, test time was about 10.000000 seconds 00:24:31.489 00:24:31.489 Latency(us) 00:24:31.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.489 
=================================================================================================================== 00:24:31.489 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:31.489 [2024-06-07 14:26:55.132551] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:31.489 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 594616 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.r6A8ABPUOD 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.r6A8ABPUOD 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.r6A8ABPUOD 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.r6A8ABPUOD' 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=596744 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 596744 /var/tmp/bdevperf.sock 00:24:31.750 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:31.751 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 596744 ']' 00:24:31.751 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.751 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:31.751 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.751 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:31.751 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.751 [2024-06-07 14:26:55.259864] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:24:31.751 [2024-06-07 14:26:55.259910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596744 ] 00:24:31.751 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.751 [2024-06-07 14:26:55.305690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.751 [2024-06-07 14:26:55.333588] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.012 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:32.012 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:24:32.012 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.r6A8ABPUOD 00:24:32.012 [2024-06-07 14:26:55.535124] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:32.012 [2024-06-07 14:26:55.535179] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:32.012 [2024-06-07 14:26:55.544041] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:32.012 [2024-06-07 14:26:55.544061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19aeb80 (107): Transport endpoint is not connected 00:24:32.012 [2024-06-07 14:26:55.545047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19aeb80 (9): Bad file descriptor 00:24:32.012 [2024-06-07 14:26:55.546049] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.012 [2024-06-07 14:26:55.546055] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:32.012 [2024-06-07 14:26:55.546062] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:32.012 request: 00:24:32.012 { 00:24:32.012 "name": "TLSTEST", 00:24:32.012 "trtype": "tcp", 00:24:32.012 "traddr": "10.0.0.2", 00:24:32.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.012 "adrfam": "ipv4", 00:24:32.012 "trsvcid": "4420", 00:24:32.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.012 "psk": "/tmp/tmp.r6A8ABPUOD", 00:24:32.012 "method": "bdev_nvme_attach_controller", 00:24:32.012 "req_id": 1 00:24:32.012 } 00:24:32.012 Got JSON-RPC error response 00:24:32.012 response: 00:24:32.012 { 00:24:32.012 "code": -5, 00:24:32.012 "message": "Input/output error" 00:24:32.012 } 00:24:32.012 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 596744 00:24:32.012 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 596744 ']' 00:24:32.012 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 596744 00:24:32.012 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:24:32.012 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:32.012 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 596744 00:24:32.012 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:24:32.012 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:24:32.012 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 596744' 00:24:32.012 killing process with pid 596744 00:24:32.012 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 596744 00:24:32.012 Received shutdown signal, test time was about 10.000000 seconds 00:24:32.012 00:24:32.012 Latency(us) 00:24:32.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.012 =================================================================================================================== 00:24:32.012 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:32.012 [2024-06-07 14:26:55.615804] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:32.012 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 596744 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LH1mVywmlF 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LH1mVywmlF 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case 
"$(type -t "$arg")" in 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.LH1mVywmlF 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LH1mVywmlF' 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=596773 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 596773 /var/tmp/bdevperf.sock 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 596773 ']' 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.273 [2024-06-07 14:26:55.763238] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:24:32.273 [2024-06-07 14:26:55.763293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596773 ] 00:24:32.273 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.273 [2024-06-07 14:26:55.818862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.273 [2024-06-07 14:26:55.846563] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:24:32.273 14:26:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.LH1mVywmlF 00:24:32.534 [2024-06-07 14:26:56.044095] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:32.534 [2024-06-07 14:26:56.044152] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:32.534 [2024-06-07 14:26:56.055218] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:32.534 [2024-06-07 14:26:56.055235] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:32.534 [2024-06-07 14:26:56.055254] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:32.534 [2024-06-07 14:26:56.056049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d0b80 (107): Transport endpoint is not connected 00:24:32.534 [2024-06-07 14:26:56.057045] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d0b80 (9): Bad file descriptor 00:24:32.534 [2024-06-07 14:26:56.058046] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.534 [2024-06-07 14:26:56.058052] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:32.534 [2024-06-07 14:26:56.058059] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:32.534 request: 00:24:32.534 { 00:24:32.534 "name": "TLSTEST", 00:24:32.534 "trtype": "tcp", 00:24:32.534 "traddr": "10.0.0.2", 00:24:32.534 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:32.534 "adrfam": "ipv4", 00:24:32.534 "trsvcid": "4420", 00:24:32.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.534 "psk": "/tmp/tmp.LH1mVywmlF", 00:24:32.534 "method": "bdev_nvme_attach_controller", 00:24:32.534 "req_id": 1 00:24:32.534 } 00:24:32.534 Got JSON-RPC error response 00:24:32.534 response: 00:24:32.534 { 00:24:32.534 "code": -5, 00:24:32.534 "message": "Input/output error" 00:24:32.534 } 00:24:32.534 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 596773 00:24:32.534 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 596773 ']' 00:24:32.534 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 596773 00:24:32.534 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:24:32.534 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:32.534 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 596773 00:24:32.534 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:24:32.534 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:24:32.534 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 596773' 00:24:32.534 killing process with pid 596773 00:24:32.534 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 596773 00:24:32.534 Received shutdown signal, test time was about 10.000000 seconds 00:24:32.534 00:24:32.534 Latency(us) 00:24:32.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.534 =================================================================================================================== 00:24:32.534 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:32.534 [2024-06-07 14:26:56.122890] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:32.534 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 596773 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LH1mVywmlF 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LH1mVywmlF 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case 
"$(type -t "$arg")" in 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.LH1mVywmlF 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LH1mVywmlF' 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=596788 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 596788 /var/tmp/bdevperf.sock 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 596788 ']' 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.795 [2024-06-07 14:26:56.280391] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:24:32.795 [2024-06-07 14:26:56.280449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596788 ] 00:24:32.795 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.795 [2024-06-07 14:26:56.334661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.795 [2024-06-07 14:26:56.362440] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:24:32.795 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LH1mVywmlF 00:24:33.057 [2024-06-07 14:26:56.620215] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.057 [2024-06-07 14:26:56.620269] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:33.057 [2024-06-07 14:26:56.626596] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:33.057 [2024-06-07 14:26:56.626612] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:33.057 [2024-06-07 14:26:56.626629] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:33.057 [2024-06-07 14:26:56.627078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1450b80 (107): Transport endpoint is not connected 00:24:33.057 [2024-06-07 14:26:56.628073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1450b80 (9): Bad file descriptor 00:24:33.057 [2024-06-07 14:26:56.629075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:33.057 [2024-06-07 14:26:56.629081] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:33.057 [2024-06-07 14:26:56.629088] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:33.057 request: 00:24:33.057 { 00:24:33.057 "name": "TLSTEST", 00:24:33.057 "trtype": "tcp", 00:24:33.057 "traddr": "10.0.0.2", 00:24:33.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.057 "adrfam": "ipv4", 00:24:33.057 "trsvcid": "4420", 00:24:33.057 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:33.057 "psk": "/tmp/tmp.LH1mVywmlF", 00:24:33.057 "method": "bdev_nvme_attach_controller", 00:24:33.057 "req_id": 1 00:24:33.057 } 00:24:33.057 Got JSON-RPC error response 00:24:33.057 response: 00:24:33.057 { 00:24:33.057 "code": -5, 00:24:33.057 "message": "Input/output error" 00:24:33.057 } 00:24:33.057 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 596788 00:24:33.057 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 596788 ']' 00:24:33.057 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 596788 00:24:33.057 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:24:33.057 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:33.057 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 596788 00:24:33.057 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:24:33.057 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:24:33.057 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 596788' 00:24:33.057 killing process with pid 596788 00:24:33.057 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 596788 00:24:33.057 Received shutdown signal, test time was about 10.000000 seconds 00:24:33.057 00:24:33.057 Latency(us) 00:24:33.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.057 =================================================================================================================== 00:24:33.057 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:33.057 [2024-06-07 14:26:56.700215] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:33.057 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 596788 00:24:33.318 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:33.318 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:24:33.318 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:33.318 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:33.318 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:33.318 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:33.318 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:24:33.318 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:33.318 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:24:33.318 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:33.319 
14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=597024 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 597024 /var/tmp/bdevperf.sock 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 597024 ']' 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.319 [2024-06-07 14:26:56.820574] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:24:33.319 [2024-06-07 14:26:56.820614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597024 ] 00:24:33.319 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.319 [2024-06-07 14:26:56.866353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.319 [2024-06-07 14:26:56.894203] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:24:33.319 14:26:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:33.581 [2024-06-07 14:26:57.107782] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:33.581 [2024-06-07 14:26:57.109331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d2ed0 (9): Bad file descriptor 00:24:33.581 [2024-06-07 14:26:57.110330] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.581 [2024-06-07 14:26:57.110337] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:33.581 [2024-06-07 14:26:57.110344] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:33.581 request: 00:24:33.581 { 00:24:33.581 "name": "TLSTEST", 00:24:33.581 "trtype": "tcp", 00:24:33.581 "traddr": "10.0.0.2", 00:24:33.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.581 "adrfam": "ipv4", 00:24:33.581 "trsvcid": "4420", 00:24:33.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.581 "method": "bdev_nvme_attach_controller", 00:24:33.581 "req_id": 1 00:24:33.581 } 00:24:33.581 Got JSON-RPC error response 00:24:33.581 response: 00:24:33.581 { 00:24:33.581 "code": -5, 00:24:33.581 "message": "Input/output error" 00:24:33.581 } 00:24:33.581 14:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 597024 00:24:33.581 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 597024 ']' 00:24:33.581 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 597024 00:24:33.581 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:24:33.581 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:33.581 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 597024 00:24:33.581 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:24:33.581 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:24:33.581 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 597024' 00:24:33.581 killing process with pid 597024 00:24:33.581 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 597024 00:24:33.581 Received shutdown signal, test time was about 10.000000 seconds 00:24:33.581 00:24:33.581 Latency(us) 00:24:33.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.581 =================================================================================================================== 00:24:33.581 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:33.581 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 597024 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 591851 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 591851 ']' 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 591851 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 591851 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 591851' 00:24:33.842 killing process with pid 591851 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 591851 00:24:33.842 
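The es=1 / valid_exec_arg bookkeeping above comes from the autotest NOT helper, which asserts that a wrapped command fails. The real common/autotest_common.sh version does more (argument validation, xtrace handling); a minimal stand-in with the same observable behaviour would be:

# Simplified sketch only: succeed exactly when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?     # run the command, capture its exit status
    (( es != 0 ))     # invert it: a non-zero status from the command counts as a pass here
}

# Used in the same spirit as target/tls.sh@155 above:
#   NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''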
[2024-06-07 14:26:57.327208] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 591851 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:33.842 14:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.5huBr61b5e 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.5huBr61b5e 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=597143 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 597143 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 597143 ']' 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:34.103 14:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.103 [2024-06-07 14:26:57.552159] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:24:34.103 [2024-06-07 14:26:57.552229] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.103 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.103 [2024-06-07 14:26:57.639908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.103 [2024-06-07 14:26:57.668150] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.103 [2024-06-07 14:26:57.668186] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.103 [2024-06-07 14:26:57.668192] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.103 [2024-06-07 14:26:57.668201] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.103 [2024-06-07 14:26:57.668205] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.103 [2024-06-07 14:26:57.668228] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.674 14:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:34.674 14:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:24:34.674 14:26:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:34.674 14:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:34.674 14:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.945 14:26:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.945 14:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.5huBr61b5e 00:24:34.945 14:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5huBr61b5e 00:24:34.945 14:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:34.945 [2024-06-07 14:26:58.496971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.945 14:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:35.251 14:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:35.251 [2024-06-07 14:26:58.805733] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:35.251 [2024-06-07 14:26:58.805908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.251 14:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:35.511 malloc0 00:24:35.511 14:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:35.511 14:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5huBr61b5e 
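The key material used from here on is produced by format_interchange_psk at target/tls.sh@159: the configured PSK plus a 4-byte checksum, base64-encoded and wrapped as NVMeTLSkey-1:<hash>:<base64>:. A rough equivalent of the helper's embedded python, assuming the checksum is a little-endian CRC32 of the key (consistent with the key_long value printed above), followed by the file handling the test performs:

format_interchange_psk() {    # sketch only; stands in for the nvmf/common.sh format_key call above
    local key=$1 digest=$2
    python3 - "$key" "$digest" << 'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                      # configured PSK bytes
crc = zlib.crc32(key).to_bytes(4, "little")     # assumption: 4-byte little-endian CRC32 appended
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:", end="")
PYEOF
}

key_long=$(format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2)
key_long_path=$(mktemp)
echo -n "$key_long" > "$key_long_path"
chmod 0600 "$key_long_path"    # anything looser is rejected later ("Incorrect permissions for PSK file")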
00:24:35.771 [2024-06-07 14:26:59.240625] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5huBr61b5e 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5huBr61b5e' 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=597508 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 597508 /var/tmp/bdevperf.sock 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 597508 ']' 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:35.771 14:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.771 [2024-06-07 14:26:59.296790] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:24:35.771 [2024-06-07 14:26:59.296862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597508 ] 00:24:35.771 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.771 [2024-06-07 14:26:59.352003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.771 [2024-06-07 14:26:59.379798] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.031 14:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:36.031 14:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:24:36.031 14:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5huBr61b5e 00:24:36.031 [2024-06-07 14:26:59.593525] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:36.031 [2024-06-07 14:26:59.593584] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:36.031 TLSTESTn1 00:24:36.291 14:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:36.291 Running I/O for 10 seconds... 00:24:46.291 00:24:46.291 Latency(us) 00:24:46.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.291 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:46.291 Verification LBA range: start 0x0 length 0x2000 00:24:46.291 TLSTESTn1 : 10.05 5742.26 22.43 0.00 0.00 22245.06 5898.24 65099.09 00:24:46.291 =================================================================================================================== 00:24:46.291 Total : 5742.26 22.43 0.00 0.00 22245.06 5898.24 65099.09 00:24:46.291 0 00:24:46.291 14:27:09 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:46.291 14:27:09 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 597508 00:24:46.291 14:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 597508 ']' 00:24:46.291 14:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 597508 00:24:46.291 14:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:24:46.291 14:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:46.291 14:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 597508 00:24:46.291 14:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:24:46.291 14:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:24:46.291 14:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 597508' 00:24:46.291 killing process with pid 597508 00:24:46.291 14:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 597508 00:24:46.291 Received shutdown signal, test time was about 10.000000 seconds 00:24:46.291 00:24:46.291 Latency(us) 00:24:46.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.291 
=================================================================================================================== 00:24:46.291 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.291 [2024-06-07 14:27:09.918915] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:46.291 14:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 597508 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.5huBr61b5e 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5huBr61b5e 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5huBr61b5e 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5huBr61b5e 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5huBr61b5e' 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=599972 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 599972 /var/tmp/bdevperf.sock 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 599972 ']' 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:46.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:46.553 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.553 [2024-06-07 14:27:10.082059] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:24:46.553 [2024-06-07 14:27:10.082119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid599972 ] 00:24:46.553 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.553 [2024-06-07 14:27:10.136615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.553 [2024-06-07 14:27:10.164274] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.816 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:46.816 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:24:46.816 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5huBr61b5e 00:24:46.816 [2024-06-07 14:27:10.378054] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:46.816 [2024-06-07 14:27:10.378091] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:46.816 [2024-06-07 14:27:10.378096] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.5huBr61b5e 00:24:46.816 request: 00:24:46.816 { 00:24:46.816 "name": "TLSTEST", 00:24:46.816 "trtype": "tcp", 00:24:46.816 "traddr": "10.0.0.2", 00:24:46.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:46.816 "adrfam": "ipv4", 00:24:46.816 "trsvcid": "4420", 00:24:46.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.816 "psk": "/tmp/tmp.5huBr61b5e", 00:24:46.816 "method": "bdev_nvme_attach_controller", 00:24:46.816 "req_id": 1 00:24:46.816 } 00:24:46.816 Got JSON-RPC error response 00:24:46.816 response: 00:24:46.816 { 00:24:46.816 "code": -1, 00:24:46.816 "message": "Operation not permitted" 00:24:46.816 } 00:24:46.816 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 599972 00:24:46.816 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 599972 ']' 00:24:46.816 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 599972 00:24:46.816 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:24:46.816 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:46.816 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 599972 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 599972' 00:24:47.078 killing process with pid 599972 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 599972 00:24:47.078 Received shutdown signal, test time was about 10.000000 seconds 00:24:47.078 00:24:47.078 Latency(us) 00:24:47.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.078 =================================================================================================================== 00:24:47.078 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # 
wait 599972 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 597143 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 597143 ']' 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 597143 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 597143 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 597143' 00:24:47.078 killing process with pid 597143 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 597143 00:24:47.078 [2024-06-07 14:27:10.618551] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:47.078 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 597143 00:24:47.340 14:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:47.340 14:27:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:47.340 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:47.340 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.340 14:27:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=600301 00:24:47.340 14:27:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 600301 00:24:47.340 14:27:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:47.340 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 600301 ']' 00:24:47.340 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.340 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:47.340 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.340 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:47.340 14:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.340 [2024-06-07 14:27:10.790918] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:24:47.340 [2024-06-07 14:27:10.790976] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.340 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.340 [2024-06-07 14:27:10.877009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.340 [2024-06-07 14:27:10.905739] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.340 [2024-06-07 14:27:10.905772] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.340 [2024-06-07 14:27:10.905777] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.340 [2024-06-07 14:27:10.905782] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.340 [2024-06-07 14:27:10.905786] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.341 [2024-06-07 14:27:10.905800] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.5huBr61b5e 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.5huBr61b5e 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.5huBr61b5e 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5huBr61b5e 00:24:48.016 14:27:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:48.278 [2024-06-07 14:27:11.726019] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.278 14:27:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:48.278 14:27:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:48.538 [2024-06-07 14:27:12.018734] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:24:48.538 [2024-06-07 14:27:12.018900] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.538 14:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:48.538 malloc0 00:24:48.798 14:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:48.798 14:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5huBr61b5e 00:24:49.059 [2024-06-07 14:27:12.477600] tcp.c:3580:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:49.059 [2024-06-07 14:27:12.477620] tcp.c:3666:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:49.059 [2024-06-07 14:27:12.477639] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:49.059 request: 00:24:49.059 { 00:24:49.059 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.059 "host": "nqn.2016-06.io.spdk:host1", 00:24:49.059 "psk": "/tmp/tmp.5huBr61b5e", 00:24:49.059 "method": "nvmf_subsystem_add_host", 00:24:49.059 "req_id": 1 00:24:49.059 } 00:24:49.059 Got JSON-RPC error response 00:24:49.059 response: 00:24:49.059 { 00:24:49.059 "code": -32603, 00:24:49.059 "message": "Internal error" 00:24:49.059 } 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 600301 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 600301 ']' 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 600301 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 600301 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 600301' 00:24:49.059 killing process with pid 600301 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 600301 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 600301 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.5huBr61b5e 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # 
nvmfpid=600774 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 600774 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 600774 ']' 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:49.059 14:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.320 [2024-06-07 14:27:12.731368] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:24:49.320 [2024-06-07 14:27:12.731425] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.320 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.320 [2024-06-07 14:27:12.817465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.320 [2024-06-07 14:27:12.846409] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.320 [2024-06-07 14:27:12.846443] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.320 [2024-06-07 14:27:12.846448] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.320 [2024-06-07 14:27:12.846453] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.320 [2024-06-07 14:27:12.846457] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
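Once the target is configured with the 0600 key, the positive path is the same attach as before but with the valid PSK, followed by driving the resulting TLSTESTn1 bdev through bdevperf's RPC helper; this is the sequence that produced the IOPS table at 14:27:09 above and is repeated after this second setup. A condensed sketch, reusing the same SPDK_DIR assumption as earlier:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: this run's checkout path

# Attach over TLS with the registered PSK; this creates the TLSTESTn1 namespace bdev.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5huBr61b5e

# Run the configured verify workload against it (flags copied from the run above).
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -t 20 -s /var/tmp/bdevperf.sock perform_tests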
00:24:49.320 [2024-06-07 14:27:12.846477] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.925 14:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:49.925 14:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:24:49.925 14:27:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:49.925 14:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:49.925 14:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.925 14:27:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.925 14:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.5huBr61b5e 00:24:49.925 14:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5huBr61b5e 00:24:49.925 14:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:50.184 [2024-06-07 14:27:13.662746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.184 14:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:50.184 14:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:50.444 [2024-06-07 14:27:13.971496] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:50.444 [2024-06-07 14:27:13.971667] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.444 14:27:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:50.704 malloc0 00:24:50.704 14:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:50.704 14:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5huBr61b5e 00:24:50.964 [2024-06-07 14:27:14.418454] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:50.964 14:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=601144 00:24:50.964 14:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:50.964 14:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:50.964 14:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 601144 /var/tmp/bdevperf.sock 00:24:50.964 14:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 601144 ']' 00:24:50.964 14:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:50.964 14:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:50.964 14:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:50.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:50.964 14:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:50.964 14:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:50.964 [2024-06-07 14:27:14.488407] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:24:50.964 [2024-06-07 14:27:14.488475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601144 ] 00:24:50.964 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.964 [2024-06-07 14:27:14.542369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.964 [2024-06-07 14:27:14.570134] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.225 14:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:51.225 14:27:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:24:51.225 14:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5huBr61b5e 00:24:51.225 [2024-06-07 14:27:14.783772] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:51.225 [2024-06-07 14:27:14.783827] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:51.225 TLSTESTn1 00:24:51.487 14:27:14 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:51.487 14:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:24:51.487 "subsystems": [ 00:24:51.487 { 00:24:51.487 "subsystem": "keyring", 00:24:51.487 "config": [] 00:24:51.487 }, 00:24:51.487 { 00:24:51.487 "subsystem": "iobuf", 00:24:51.487 "config": [ 00:24:51.487 { 00:24:51.487 "method": "iobuf_set_options", 00:24:51.487 "params": { 00:24:51.487 "small_pool_count": 8192, 00:24:51.487 "large_pool_count": 1024, 00:24:51.487 "small_bufsize": 8192, 00:24:51.487 "large_bufsize": 135168 00:24:51.487 } 00:24:51.487 } 00:24:51.487 ] 00:24:51.487 }, 00:24:51.487 { 00:24:51.487 "subsystem": "sock", 00:24:51.487 "config": [ 00:24:51.487 { 00:24:51.487 "method": "sock_set_default_impl", 00:24:51.487 "params": { 00:24:51.487 "impl_name": "posix" 00:24:51.487 } 00:24:51.487 }, 00:24:51.487 { 00:24:51.487 "method": "sock_impl_set_options", 00:24:51.487 "params": { 00:24:51.487 "impl_name": "ssl", 00:24:51.487 "recv_buf_size": 4096, 00:24:51.487 "send_buf_size": 4096, 00:24:51.487 "enable_recv_pipe": true, 00:24:51.487 "enable_quickack": false, 00:24:51.487 "enable_placement_id": 0, 00:24:51.487 "enable_zerocopy_send_server": true, 00:24:51.487 "enable_zerocopy_send_client": false, 00:24:51.487 "zerocopy_threshold": 0, 00:24:51.487 "tls_version": 0, 00:24:51.487 "enable_ktls": false 00:24:51.487 } 00:24:51.487 }, 00:24:51.487 { 00:24:51.487 "method": "sock_impl_set_options", 00:24:51.487 "params": { 00:24:51.487 "impl_name": "posix", 00:24:51.487 "recv_buf_size": 2097152, 00:24:51.487 "send_buf_size": 2097152, 
00:24:51.487 "enable_recv_pipe": true, 00:24:51.487 "enable_quickack": false, 00:24:51.487 "enable_placement_id": 0, 00:24:51.487 "enable_zerocopy_send_server": true, 00:24:51.487 "enable_zerocopy_send_client": false, 00:24:51.487 "zerocopy_threshold": 0, 00:24:51.487 "tls_version": 0, 00:24:51.487 "enable_ktls": false 00:24:51.487 } 00:24:51.487 } 00:24:51.487 ] 00:24:51.487 }, 00:24:51.487 { 00:24:51.487 "subsystem": "vmd", 00:24:51.487 "config": [] 00:24:51.487 }, 00:24:51.487 { 00:24:51.487 "subsystem": "accel", 00:24:51.487 "config": [ 00:24:51.487 { 00:24:51.487 "method": "accel_set_options", 00:24:51.487 "params": { 00:24:51.487 "small_cache_size": 128, 00:24:51.487 "large_cache_size": 16, 00:24:51.487 "task_count": 2048, 00:24:51.487 "sequence_count": 2048, 00:24:51.487 "buf_count": 2048 00:24:51.487 } 00:24:51.487 } 00:24:51.487 ] 00:24:51.487 }, 00:24:51.487 { 00:24:51.487 "subsystem": "bdev", 00:24:51.487 "config": [ 00:24:51.487 { 00:24:51.487 "method": "bdev_set_options", 00:24:51.487 "params": { 00:24:51.487 "bdev_io_pool_size": 65535, 00:24:51.487 "bdev_io_cache_size": 256, 00:24:51.487 "bdev_auto_examine": true, 00:24:51.487 "iobuf_small_cache_size": 128, 00:24:51.487 "iobuf_large_cache_size": 16 00:24:51.487 } 00:24:51.487 }, 00:24:51.487 { 00:24:51.487 "method": "bdev_raid_set_options", 00:24:51.487 "params": { 00:24:51.487 "process_window_size_kb": 1024 00:24:51.487 } 00:24:51.487 }, 00:24:51.487 { 00:24:51.487 "method": "bdev_iscsi_set_options", 00:24:51.487 "params": { 00:24:51.487 "timeout_sec": 30 00:24:51.487 } 00:24:51.487 }, 00:24:51.487 { 00:24:51.487 "method": "bdev_nvme_set_options", 00:24:51.487 "params": { 00:24:51.487 "action_on_timeout": "none", 00:24:51.487 "timeout_us": 0, 00:24:51.487 "timeout_admin_us": 0, 00:24:51.487 "keep_alive_timeout_ms": 10000, 00:24:51.487 "arbitration_burst": 0, 00:24:51.487 "low_priority_weight": 0, 00:24:51.487 "medium_priority_weight": 0, 00:24:51.487 "high_priority_weight": 0, 00:24:51.487 "nvme_adminq_poll_period_us": 10000, 00:24:51.488 "nvme_ioq_poll_period_us": 0, 00:24:51.488 "io_queue_requests": 0, 00:24:51.488 "delay_cmd_submit": true, 00:24:51.488 "transport_retry_count": 4, 00:24:51.488 "bdev_retry_count": 3, 00:24:51.488 "transport_ack_timeout": 0, 00:24:51.488 "ctrlr_loss_timeout_sec": 0, 00:24:51.488 "reconnect_delay_sec": 0, 00:24:51.488 "fast_io_fail_timeout_sec": 0, 00:24:51.488 "disable_auto_failback": false, 00:24:51.488 "generate_uuids": false, 00:24:51.488 "transport_tos": 0, 00:24:51.488 "nvme_error_stat": false, 00:24:51.488 "rdma_srq_size": 0, 00:24:51.488 "io_path_stat": false, 00:24:51.488 "allow_accel_sequence": false, 00:24:51.488 "rdma_max_cq_size": 0, 00:24:51.488 "rdma_cm_event_timeout_ms": 0, 00:24:51.488 "dhchap_digests": [ 00:24:51.488 "sha256", 00:24:51.488 "sha384", 00:24:51.488 "sha512" 00:24:51.488 ], 00:24:51.488 "dhchap_dhgroups": [ 00:24:51.488 "null", 00:24:51.488 "ffdhe2048", 00:24:51.488 "ffdhe3072", 00:24:51.488 "ffdhe4096", 00:24:51.488 "ffdhe6144", 00:24:51.488 "ffdhe8192" 00:24:51.488 ] 00:24:51.488 } 00:24:51.488 }, 00:24:51.488 { 00:24:51.488 "method": "bdev_nvme_set_hotplug", 00:24:51.488 "params": { 00:24:51.488 "period_us": 100000, 00:24:51.488 "enable": false 00:24:51.488 } 00:24:51.488 }, 00:24:51.488 { 00:24:51.488 "method": "bdev_malloc_create", 00:24:51.488 "params": { 00:24:51.488 "name": "malloc0", 00:24:51.488 "num_blocks": 8192, 00:24:51.488 "block_size": 4096, 00:24:51.488 "physical_block_size": 4096, 00:24:51.488 "uuid": "539a1ea1-cb8c-4fdb-8345-bfa6ee7168aa", 
00:24:51.488 "optimal_io_boundary": 0 00:24:51.488 } 00:24:51.488 }, 00:24:51.488 { 00:24:51.488 "method": "bdev_wait_for_examine" 00:24:51.488 } 00:24:51.488 ] 00:24:51.488 }, 00:24:51.488 { 00:24:51.488 "subsystem": "nbd", 00:24:51.488 "config": [] 00:24:51.488 }, 00:24:51.488 { 00:24:51.488 "subsystem": "scheduler", 00:24:51.488 "config": [ 00:24:51.488 { 00:24:51.488 "method": "framework_set_scheduler", 00:24:51.488 "params": { 00:24:51.488 "name": "static" 00:24:51.488 } 00:24:51.488 } 00:24:51.488 ] 00:24:51.488 }, 00:24:51.488 { 00:24:51.488 "subsystem": "nvmf", 00:24:51.488 "config": [ 00:24:51.488 { 00:24:51.488 "method": "nvmf_set_config", 00:24:51.488 "params": { 00:24:51.488 "discovery_filter": "match_any", 00:24:51.488 "admin_cmd_passthru": { 00:24:51.488 "identify_ctrlr": false 00:24:51.488 } 00:24:51.488 } 00:24:51.488 }, 00:24:51.488 { 00:24:51.488 "method": "nvmf_set_max_subsystems", 00:24:51.488 "params": { 00:24:51.488 "max_subsystems": 1024 00:24:51.488 } 00:24:51.488 }, 00:24:51.488 { 00:24:51.488 "method": "nvmf_set_crdt", 00:24:51.488 "params": { 00:24:51.488 "crdt1": 0, 00:24:51.488 "crdt2": 0, 00:24:51.488 "crdt3": 0 00:24:51.488 } 00:24:51.488 }, 00:24:51.488 { 00:24:51.488 "method": "nvmf_create_transport", 00:24:51.488 "params": { 00:24:51.488 "trtype": "TCP", 00:24:51.488 "max_queue_depth": 128, 00:24:51.488 "max_io_qpairs_per_ctrlr": 127, 00:24:51.488 "in_capsule_data_size": 4096, 00:24:51.488 "max_io_size": 131072, 00:24:51.488 "io_unit_size": 131072, 00:24:51.488 "max_aq_depth": 128, 00:24:51.488 "num_shared_buffers": 511, 00:24:51.488 "buf_cache_size": 4294967295, 00:24:51.488 "dif_insert_or_strip": false, 00:24:51.488 "zcopy": false, 00:24:51.488 "c2h_success": false, 00:24:51.488 "sock_priority": 0, 00:24:51.488 "abort_timeout_sec": 1, 00:24:51.488 "ack_timeout": 0, 00:24:51.488 "data_wr_pool_size": 0 00:24:51.488 } 00:24:51.488 }, 00:24:51.488 { 00:24:51.488 "method": "nvmf_create_subsystem", 00:24:51.488 "params": { 00:24:51.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.488 "allow_any_host": false, 00:24:51.488 "serial_number": "SPDK00000000000001", 00:24:51.488 "model_number": "SPDK bdev Controller", 00:24:51.488 "max_namespaces": 10, 00:24:51.488 "min_cntlid": 1, 00:24:51.488 "max_cntlid": 65519, 00:24:51.488 "ana_reporting": false 00:24:51.488 } 00:24:51.488 }, 00:24:51.488 { 00:24:51.488 "method": "nvmf_subsystem_add_host", 00:24:51.488 "params": { 00:24:51.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.488 "host": "nqn.2016-06.io.spdk:host1", 00:24:51.488 "psk": "/tmp/tmp.5huBr61b5e" 00:24:51.488 } 00:24:51.488 }, 00:24:51.488 { 00:24:51.488 "method": "nvmf_subsystem_add_ns", 00:24:51.488 "params": { 00:24:51.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.488 "namespace": { 00:24:51.488 "nsid": 1, 00:24:51.488 "bdev_name": "malloc0", 00:24:51.488 "nguid": "539A1EA1CB8C4FDB8345BFA6EE7168AA", 00:24:51.488 "uuid": "539a1ea1-cb8c-4fdb-8345-bfa6ee7168aa", 00:24:51.488 "no_auto_visible": false 00:24:51.488 } 00:24:51.488 } 00:24:51.488 }, 00:24:51.488 { 00:24:51.488 "method": "nvmf_subsystem_add_listener", 00:24:51.488 "params": { 00:24:51.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.488 "listen_address": { 00:24:51.488 "trtype": "TCP", 00:24:51.488 "adrfam": "IPv4", 00:24:51.488 "traddr": "10.0.0.2", 00:24:51.488 "trsvcid": "4420" 00:24:51.488 }, 00:24:51.488 "secure_channel": true 00:24:51.488 } 00:24:51.488 } 00:24:51.488 ] 00:24:51.488 } 00:24:51.488 ] 00:24:51.488 }' 00:24:51.488 14:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:51.749 14:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:51.749 "subsystems": [ 00:24:51.749 { 00:24:51.749 "subsystem": "keyring", 00:24:51.749 "config": [] 00:24:51.749 }, 00:24:51.749 { 00:24:51.749 "subsystem": "iobuf", 00:24:51.749 "config": [ 00:24:51.749 { 00:24:51.749 "method": "iobuf_set_options", 00:24:51.749 "params": { 00:24:51.749 "small_pool_count": 8192, 00:24:51.749 "large_pool_count": 1024, 00:24:51.749 "small_bufsize": 8192, 00:24:51.749 "large_bufsize": 135168 00:24:51.749 } 00:24:51.749 } 00:24:51.749 ] 00:24:51.749 }, 00:24:51.749 { 00:24:51.749 "subsystem": "sock", 00:24:51.749 "config": [ 00:24:51.749 { 00:24:51.749 "method": "sock_set_default_impl", 00:24:51.749 "params": { 00:24:51.749 "impl_name": "posix" 00:24:51.749 } 00:24:51.749 }, 00:24:51.749 { 00:24:51.749 "method": "sock_impl_set_options", 00:24:51.749 "params": { 00:24:51.749 "impl_name": "ssl", 00:24:51.749 "recv_buf_size": 4096, 00:24:51.749 "send_buf_size": 4096, 00:24:51.749 "enable_recv_pipe": true, 00:24:51.749 "enable_quickack": false, 00:24:51.749 "enable_placement_id": 0, 00:24:51.749 "enable_zerocopy_send_server": true, 00:24:51.749 "enable_zerocopy_send_client": false, 00:24:51.749 "zerocopy_threshold": 0, 00:24:51.749 "tls_version": 0, 00:24:51.749 "enable_ktls": false 00:24:51.749 } 00:24:51.749 }, 00:24:51.749 { 00:24:51.749 "method": "sock_impl_set_options", 00:24:51.749 "params": { 00:24:51.749 "impl_name": "posix", 00:24:51.749 "recv_buf_size": 2097152, 00:24:51.749 "send_buf_size": 2097152, 00:24:51.749 "enable_recv_pipe": true, 00:24:51.749 "enable_quickack": false, 00:24:51.749 "enable_placement_id": 0, 00:24:51.749 "enable_zerocopy_send_server": true, 00:24:51.749 "enable_zerocopy_send_client": false, 00:24:51.749 "zerocopy_threshold": 0, 00:24:51.749 "tls_version": 0, 00:24:51.749 "enable_ktls": false 00:24:51.749 } 00:24:51.749 } 00:24:51.749 ] 00:24:51.749 }, 00:24:51.749 { 00:24:51.749 "subsystem": "vmd", 00:24:51.749 "config": [] 00:24:51.749 }, 00:24:51.749 { 00:24:51.749 "subsystem": "accel", 00:24:51.749 "config": [ 00:24:51.749 { 00:24:51.749 "method": "accel_set_options", 00:24:51.749 "params": { 00:24:51.749 "small_cache_size": 128, 00:24:51.749 "large_cache_size": 16, 00:24:51.749 "task_count": 2048, 00:24:51.749 "sequence_count": 2048, 00:24:51.749 "buf_count": 2048 00:24:51.749 } 00:24:51.749 } 00:24:51.749 ] 00:24:51.749 }, 00:24:51.749 { 00:24:51.749 "subsystem": "bdev", 00:24:51.749 "config": [ 00:24:51.749 { 00:24:51.749 "method": "bdev_set_options", 00:24:51.749 "params": { 00:24:51.749 "bdev_io_pool_size": 65535, 00:24:51.749 "bdev_io_cache_size": 256, 00:24:51.749 "bdev_auto_examine": true, 00:24:51.749 "iobuf_small_cache_size": 128, 00:24:51.749 "iobuf_large_cache_size": 16 00:24:51.749 } 00:24:51.749 }, 00:24:51.749 { 00:24:51.749 "method": "bdev_raid_set_options", 00:24:51.749 "params": { 00:24:51.749 "process_window_size_kb": 1024 00:24:51.749 } 00:24:51.749 }, 00:24:51.749 { 00:24:51.749 "method": "bdev_iscsi_set_options", 00:24:51.749 "params": { 00:24:51.749 "timeout_sec": 30 00:24:51.749 } 00:24:51.749 }, 00:24:51.749 { 00:24:51.749 "method": "bdev_nvme_set_options", 00:24:51.749 "params": { 00:24:51.749 "action_on_timeout": "none", 00:24:51.749 "timeout_us": 0, 00:24:51.749 "timeout_admin_us": 0, 00:24:51.749 "keep_alive_timeout_ms": 10000, 00:24:51.749 "arbitration_burst": 0, 00:24:51.749 "low_priority_weight": 0, 
00:24:51.749 "medium_priority_weight": 0, 00:24:51.749 "high_priority_weight": 0, 00:24:51.749 "nvme_adminq_poll_period_us": 10000, 00:24:51.749 "nvme_ioq_poll_period_us": 0, 00:24:51.749 "io_queue_requests": 512, 00:24:51.749 "delay_cmd_submit": true, 00:24:51.749 "transport_retry_count": 4, 00:24:51.749 "bdev_retry_count": 3, 00:24:51.749 "transport_ack_timeout": 0, 00:24:51.749 "ctrlr_loss_timeout_sec": 0, 00:24:51.749 "reconnect_delay_sec": 0, 00:24:51.749 "fast_io_fail_timeout_sec": 0, 00:24:51.749 "disable_auto_failback": false, 00:24:51.749 "generate_uuids": false, 00:24:51.749 "transport_tos": 0, 00:24:51.749 "nvme_error_stat": false, 00:24:51.749 "rdma_srq_size": 0, 00:24:51.749 "io_path_stat": false, 00:24:51.749 "allow_accel_sequence": false, 00:24:51.749 "rdma_max_cq_size": 0, 00:24:51.749 "rdma_cm_event_timeout_ms": 0, 00:24:51.749 "dhchap_digests": [ 00:24:51.749 "sha256", 00:24:51.750 "sha384", 00:24:51.750 "sha512" 00:24:51.750 ], 00:24:51.750 "dhchap_dhgroups": [ 00:24:51.750 "null", 00:24:51.750 "ffdhe2048", 00:24:51.750 "ffdhe3072", 00:24:51.750 "ffdhe4096", 00:24:51.750 "ffdhe6144", 00:24:51.750 "ffdhe8192" 00:24:51.750 ] 00:24:51.750 } 00:24:51.750 }, 00:24:51.750 { 00:24:51.750 "method": "bdev_nvme_attach_controller", 00:24:51.750 "params": { 00:24:51.750 "name": "TLSTEST", 00:24:51.750 "trtype": "TCP", 00:24:51.750 "adrfam": "IPv4", 00:24:51.750 "traddr": "10.0.0.2", 00:24:51.750 "trsvcid": "4420", 00:24:51.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.750 "prchk_reftag": false, 00:24:51.750 "prchk_guard": false, 00:24:51.750 "ctrlr_loss_timeout_sec": 0, 00:24:51.750 "reconnect_delay_sec": 0, 00:24:51.750 "fast_io_fail_timeout_sec": 0, 00:24:51.750 "psk": "/tmp/tmp.5huBr61b5e", 00:24:51.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:51.750 "hdgst": false, 00:24:51.750 "ddgst": false 00:24:51.750 } 00:24:51.750 }, 00:24:51.750 { 00:24:51.750 "method": "bdev_nvme_set_hotplug", 00:24:51.750 "params": { 00:24:51.750 "period_us": 100000, 00:24:51.750 "enable": false 00:24:51.750 } 00:24:51.750 }, 00:24:51.750 { 00:24:51.750 "method": "bdev_wait_for_examine" 00:24:51.750 } 00:24:51.750 ] 00:24:51.750 }, 00:24:51.750 { 00:24:51.750 "subsystem": "nbd", 00:24:51.750 "config": [] 00:24:51.750 } 00:24:51.750 ] 00:24:51.750 }' 00:24:51.750 14:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 601144 00:24:51.750 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 601144 ']' 00:24:51.750 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 601144 00:24:51.750 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:24:51.750 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:51.750 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 601144 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 601144' 00:24:52.011 killing process with pid 601144 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 601144 00:24:52.011 Received shutdown signal, test time was about 10.000000 seconds 00:24:52.011 00:24:52.011 Latency(us) 00:24:52.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.011 
=================================================================================================================== 00:24:52.011 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:52.011 [2024-06-07 14:27:15.411708] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 601144 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 600774 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 600774 ']' 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 600774 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 600774 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 600774' 00:24:52.011 killing process with pid 600774 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 600774 00:24:52.011 [2024-06-07 14:27:15.572149] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:52.011 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 600774 00:24:52.273 14:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:52.273 14:27:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:52.273 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:52.273 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.273 14:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:24:52.273 "subsystems": [ 00:24:52.273 { 00:24:52.273 "subsystem": "keyring", 00:24:52.273 "config": [] 00:24:52.273 }, 00:24:52.273 { 00:24:52.273 "subsystem": "iobuf", 00:24:52.273 "config": [ 00:24:52.273 { 00:24:52.273 "method": "iobuf_set_options", 00:24:52.273 "params": { 00:24:52.273 "small_pool_count": 8192, 00:24:52.273 "large_pool_count": 1024, 00:24:52.273 "small_bufsize": 8192, 00:24:52.273 "large_bufsize": 135168 00:24:52.273 } 00:24:52.273 } 00:24:52.273 ] 00:24:52.273 }, 00:24:52.273 { 00:24:52.273 "subsystem": "sock", 00:24:52.273 "config": [ 00:24:52.273 { 00:24:52.273 "method": "sock_set_default_impl", 00:24:52.273 "params": { 00:24:52.273 "impl_name": "posix" 00:24:52.273 } 00:24:52.273 }, 00:24:52.273 { 00:24:52.273 "method": "sock_impl_set_options", 00:24:52.273 "params": { 00:24:52.273 "impl_name": "ssl", 00:24:52.273 "recv_buf_size": 4096, 00:24:52.273 "send_buf_size": 4096, 00:24:52.273 "enable_recv_pipe": true, 00:24:52.273 "enable_quickack": false, 00:24:52.273 "enable_placement_id": 0, 00:24:52.273 "enable_zerocopy_send_server": true, 00:24:52.273 "enable_zerocopy_send_client": false, 00:24:52.273 "zerocopy_threshold": 0, 00:24:52.273 "tls_version": 0, 00:24:52.273 "enable_ktls": false 00:24:52.273 } 00:24:52.273 }, 00:24:52.273 { 00:24:52.273 "method": "sock_impl_set_options", 00:24:52.273 "params": { 
00:24:52.273 "impl_name": "posix", 00:24:52.273 "recv_buf_size": 2097152, 00:24:52.273 "send_buf_size": 2097152, 00:24:52.273 "enable_recv_pipe": true, 00:24:52.273 "enable_quickack": false, 00:24:52.273 "enable_placement_id": 0, 00:24:52.273 "enable_zerocopy_send_server": true, 00:24:52.273 "enable_zerocopy_send_client": false, 00:24:52.273 "zerocopy_threshold": 0, 00:24:52.273 "tls_version": 0, 00:24:52.273 "enable_ktls": false 00:24:52.273 } 00:24:52.273 } 00:24:52.273 ] 00:24:52.273 }, 00:24:52.273 { 00:24:52.273 "subsystem": "vmd", 00:24:52.273 "config": [] 00:24:52.273 }, 00:24:52.273 { 00:24:52.273 "subsystem": "accel", 00:24:52.273 "config": [ 00:24:52.273 { 00:24:52.273 "method": "accel_set_options", 00:24:52.273 "params": { 00:24:52.273 "small_cache_size": 128, 00:24:52.273 "large_cache_size": 16, 00:24:52.273 "task_count": 2048, 00:24:52.273 "sequence_count": 2048, 00:24:52.273 "buf_count": 2048 00:24:52.273 } 00:24:52.273 } 00:24:52.273 ] 00:24:52.273 }, 00:24:52.273 { 00:24:52.273 "subsystem": "bdev", 00:24:52.273 "config": [ 00:24:52.273 { 00:24:52.273 "method": "bdev_set_options", 00:24:52.273 "params": { 00:24:52.273 "bdev_io_pool_size": 65535, 00:24:52.273 "bdev_io_cache_size": 256, 00:24:52.273 "bdev_auto_examine": true, 00:24:52.273 "iobuf_small_cache_size": 128, 00:24:52.273 "iobuf_large_cache_size": 16 00:24:52.273 } 00:24:52.273 }, 00:24:52.273 { 00:24:52.273 "method": "bdev_raid_set_options", 00:24:52.273 "params": { 00:24:52.273 "process_window_size_kb": 1024 00:24:52.273 } 00:24:52.273 }, 00:24:52.273 { 00:24:52.273 "method": "bdev_iscsi_set_options", 00:24:52.273 "params": { 00:24:52.273 "timeout_sec": 30 00:24:52.273 } 00:24:52.273 }, 00:24:52.273 { 00:24:52.273 "method": "bdev_nvme_set_options", 00:24:52.273 "params": { 00:24:52.273 "action_on_timeout": "none", 00:24:52.273 "timeout_us": 0, 00:24:52.273 "timeout_admin_us": 0, 00:24:52.273 "keep_alive_timeout_ms": 10000, 00:24:52.273 "arbitration_burst": 0, 00:24:52.273 "low_priority_weight": 0, 00:24:52.273 "medium_priority_weight": 0, 00:24:52.273 "high_priority_weight": 0, 00:24:52.273 "nvme_adminq_poll_period_us": 10000, 00:24:52.273 "nvme_ioq_poll_period_us": 0, 00:24:52.273 "io_queue_requests": 0, 00:24:52.273 "delay_cmd_submit": true, 00:24:52.273 "transport_retry_count": 4, 00:24:52.273 "bdev_retry_count": 3, 00:24:52.273 "transport_ack_timeout": 0, 00:24:52.273 "ctrlr_loss_timeout_sec": 0, 00:24:52.273 "reconnect_delay_sec": 0, 00:24:52.273 "fast_io_fail_timeout_sec": 0, 00:24:52.273 "disable_auto_failback": false, 00:24:52.273 "generate_uuids": false, 00:24:52.273 "transport_tos": 0, 00:24:52.273 "nvme_error_stat": false, 00:24:52.273 "rdma_srq_size": 0, 00:24:52.273 "io_path_stat": false, 00:24:52.273 "allow_accel_sequence": false, 00:24:52.273 "rdma_max_cq_size": 0, 00:24:52.273 "rdma_cm_event_timeout_ms": 0, 00:24:52.273 "dhchap_digests": [ 00:24:52.273 "sha256", 00:24:52.273 "sha384", 00:24:52.273 "sha512" 00:24:52.273 ], 00:24:52.273 "dhchap_dhgroups": [ 00:24:52.273 "null", 00:24:52.273 "ffdhe2048", 00:24:52.273 "ffdhe3072", 00:24:52.273 "ffdhe4096", 00:24:52.273 "ffdhe6144", 00:24:52.273 "ffdhe8192" 00:24:52.273 ] 00:24:52.273 } 00:24:52.273 }, 00:24:52.273 { 00:24:52.273 "method": "bdev_nvme_set_hotplug", 00:24:52.273 "params": { 00:24:52.273 "period_us": 100000, 00:24:52.273 "enable": false 00:24:52.273 } 00:24:52.273 }, 00:24:52.273 { 00:24:52.273 "method": "bdev_malloc_create", 00:24:52.273 "params": { 00:24:52.273 "name": "malloc0", 00:24:52.273 "num_blocks": 8192, 00:24:52.273 "block_size": 
4096, 00:24:52.273 "physical_block_size": 4096, 00:24:52.273 "uuid": "539a1ea1-cb8c-4fdb-8345-bfa6ee7168aa", 00:24:52.273 "optimal_io_boundary": 0 00:24:52.273 } 00:24:52.273 }, 00:24:52.273 { 00:24:52.273 "method": "bdev_wait_for_examine" 00:24:52.274 } 00:24:52.274 ] 00:24:52.274 }, 00:24:52.274 { 00:24:52.274 "subsystem": "nbd", 00:24:52.274 "config": [] 00:24:52.274 }, 00:24:52.274 { 00:24:52.274 "subsystem": "scheduler", 00:24:52.274 "config": [ 00:24:52.274 { 00:24:52.274 "method": "framework_set_scheduler", 00:24:52.274 "params": { 00:24:52.274 "name": "static" 00:24:52.274 } 00:24:52.274 } 00:24:52.274 ] 00:24:52.274 }, 00:24:52.274 { 00:24:52.274 "subsystem": "nvmf", 00:24:52.274 "config": [ 00:24:52.274 { 00:24:52.274 "method": "nvmf_set_config", 00:24:52.274 "params": { 00:24:52.274 "discovery_filter": "match_any", 00:24:52.274 "admin_cmd_passthru": { 00:24:52.274 "identify_ctrlr": false 00:24:52.274 } 00:24:52.274 } 00:24:52.274 }, 00:24:52.274 { 00:24:52.274 "method": "nvmf_set_max_subsystems", 00:24:52.274 "params": { 00:24:52.274 "max_subsystems": 1024 00:24:52.274 } 00:24:52.274 }, 00:24:52.274 { 00:24:52.274 "method": "nvmf_set_crdt", 00:24:52.274 "params": { 00:24:52.274 "crdt1": 0, 00:24:52.274 "crdt2": 0, 00:24:52.274 "crdt3": 0 00:24:52.274 } 00:24:52.274 }, 00:24:52.274 { 00:24:52.274 "method": "nvmf_create_transport", 00:24:52.274 "params": { 00:24:52.274 "trtype": "TCP", 00:24:52.274 "max_queue_depth": 128, 00:24:52.274 "max_io_qpairs_per_ctrlr": 127, 00:24:52.274 "in_capsule_data_size": 4096, 00:24:52.274 "max_io_size": 131072, 00:24:52.274 "io_unit_size": 131072, 00:24:52.274 "max_aq_depth": 128, 00:24:52.274 "num_shared_buffers": 511, 00:24:52.274 "buf_cache_size": 4294967295, 00:24:52.274 "dif_insert_or_strip": false, 00:24:52.274 "zcopy": false, 00:24:52.274 "c2h_success": false, 00:24:52.274 "sock_priority": 0, 00:24:52.274 "abort_timeout_sec": 1, 00:24:52.274 "ack_timeout": 0, 00:24:52.274 "data_wr_pool_size": 0 00:24:52.274 } 00:24:52.274 }, 00:24:52.274 { 00:24:52.274 "method": "nvmf_create_subsystem", 00:24:52.274 "params": { 00:24:52.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.274 "allow_any_host": false, 00:24:52.274 "serial_number": "SPDK00000000000001", 00:24:52.274 "model_number": "SPDK bdev Controller", 00:24:52.274 "max_namespaces": 10, 00:24:52.274 "min_cntlid": 1, 00:24:52.274 "max_cntlid": 65519, 00:24:52.274 "ana_reporting": false 00:24:52.274 } 00:24:52.274 }, 00:24:52.274 { 00:24:52.274 "method": "nvmf_subsystem_add_host", 00:24:52.274 "params": { 00:24:52.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.274 "host": "nqn.2016-06.io.spdk:host1", 00:24:52.274 "psk": "/tmp/tmp.5huBr61b5e" 00:24:52.274 } 00:24:52.274 }, 00:24:52.274 { 00:24:52.274 "method": "nvmf_subsystem_add_ns", 00:24:52.274 "params": { 00:24:52.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.274 "namespace": { 00:24:52.274 "nsid": 1, 00:24:52.274 "bdev_name": "malloc0", 00:24:52.274 "nguid": "539A1EA1CB8C4FDB8345BFA6EE7168AA", 00:24:52.274 "uuid": "539a1ea1-cb8c-4fdb-8345-bfa6ee7168aa", 00:24:52.274 "no_auto_visible": false 00:24:52.274 } 00:24:52.274 } 00:24:52.274 }, 00:24:52.274 { 00:24:52.274 "method": "nvmf_subsystem_add_listener", 00:24:52.274 "params": { 00:24:52.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.274 "listen_address": { 00:24:52.274 "trtype": "TCP", 00:24:52.274 "adrfam": "IPv4", 00:24:52.274 "traddr": "10.0.0.2", 00:24:52.274 "trsvcid": "4420" 00:24:52.274 }, 00:24:52.274 "secure_channel": true 00:24:52.274 } 00:24:52.274 } 00:24:52.274 ] 
00:24:52.274 } 00:24:52.274 ] 00:24:52.274 }' 00:24:52.274 14:27:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=601265 00:24:52.274 14:27:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 601265 00:24:52.274 14:27:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:52.274 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 601265 ']' 00:24:52.274 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.274 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:52.274 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.274 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:52.274 14:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.274 [2024-06-07 14:27:15.743019] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:24:52.274 [2024-06-07 14:27:15.743076] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.274 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.274 [2024-06-07 14:27:15.829009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.274 [2024-06-07 14:27:15.856670] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.274 [2024-06-07 14:27:15.856702] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.274 [2024-06-07 14:27:15.856708] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.274 [2024-06-07 14:27:15.856713] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.274 [2024-06-07 14:27:15.856717] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
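The '-c /dev/fd/62' argument above means the new nvmf_tgt reads its JSON configuration from a file descriptor rather than a file on disk, which lets a configuration captured earlier with save_config be replayed into a fresh target. A minimal sketch of that pattern, assuming the standard SPDK tree layout and the default /var/tmp/spdk.sock RPC socket (the fd number bash assigns to the process substitution may differ from the one in the trace):

  # capture the live target configuration as JSON
  CONFIG=$(scripts/rpc.py -s /var/tmp/spdk.sock save_config)
  # start a fresh target that reads the same JSON from a file descriptor
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$CONFIG")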
00:24:52.274 [2024-06-07 14:27:15.856759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.536 [2024-06-07 14:27:16.033994] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.536 [2024-06-07 14:27:16.049964] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:52.536 [2024-06-07 14:27:16.066021] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:52.536 [2024-06-07 14:27:16.078352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=601522 00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 601522 /var/tmp/bdevperf.sock 00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 601522 ']' 00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:53.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
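At this point the target is listening on 10.0.0.2:4420 with a secure channel (TLS) and a PSK registered for host1, built entirely from the JSON replayed above. The same state is also built step by step over RPC by the setup_nvmf_tgt helper later in this log; a condensed sketch of those calls as they appear in the trace, reusing the NQNs, listen address, and temporary PSK path shown above:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k requests a secure channel, i.e. the TLS listener ("secure_channel": true in the config above)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # registering a PSK by path is what triggers the 'PSK path' deprecation warning seen in this run
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5huBr61b5e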
00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.107 14:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:24:53.107 "subsystems": [ 00:24:53.107 { 00:24:53.107 "subsystem": "keyring", 00:24:53.107 "config": [] 00:24:53.107 }, 00:24:53.107 { 00:24:53.107 "subsystem": "iobuf", 00:24:53.107 "config": [ 00:24:53.107 { 00:24:53.107 "method": "iobuf_set_options", 00:24:53.107 "params": { 00:24:53.107 "small_pool_count": 8192, 00:24:53.107 "large_pool_count": 1024, 00:24:53.107 "small_bufsize": 8192, 00:24:53.107 "large_bufsize": 135168 00:24:53.107 } 00:24:53.107 } 00:24:53.107 ] 00:24:53.107 }, 00:24:53.107 { 00:24:53.107 "subsystem": "sock", 00:24:53.107 "config": [ 00:24:53.107 { 00:24:53.107 "method": "sock_set_default_impl", 00:24:53.107 "params": { 00:24:53.107 "impl_name": "posix" 00:24:53.107 } 00:24:53.107 }, 00:24:53.107 { 00:24:53.107 "method": "sock_impl_set_options", 00:24:53.107 "params": { 00:24:53.107 "impl_name": "ssl", 00:24:53.107 "recv_buf_size": 4096, 00:24:53.107 "send_buf_size": 4096, 00:24:53.107 "enable_recv_pipe": true, 00:24:53.107 "enable_quickack": false, 00:24:53.107 "enable_placement_id": 0, 00:24:53.107 "enable_zerocopy_send_server": true, 00:24:53.107 "enable_zerocopy_send_client": false, 00:24:53.107 "zerocopy_threshold": 0, 00:24:53.107 "tls_version": 0, 00:24:53.107 "enable_ktls": false 00:24:53.107 } 00:24:53.107 }, 00:24:53.107 { 00:24:53.107 "method": "sock_impl_set_options", 00:24:53.107 "params": { 00:24:53.107 "impl_name": "posix", 00:24:53.107 "recv_buf_size": 2097152, 00:24:53.107 "send_buf_size": 2097152, 00:24:53.107 "enable_recv_pipe": true, 00:24:53.107 "enable_quickack": false, 00:24:53.107 "enable_placement_id": 0, 00:24:53.107 "enable_zerocopy_send_server": true, 00:24:53.107 "enable_zerocopy_send_client": false, 00:24:53.107 "zerocopy_threshold": 0, 00:24:53.107 "tls_version": 0, 00:24:53.107 "enable_ktls": false 00:24:53.107 } 00:24:53.107 } 00:24:53.107 ] 00:24:53.107 }, 00:24:53.107 { 00:24:53.107 "subsystem": "vmd", 00:24:53.107 "config": [] 00:24:53.107 }, 00:24:53.107 { 00:24:53.107 "subsystem": "accel", 00:24:53.107 "config": [ 00:24:53.107 { 00:24:53.107 "method": "accel_set_options", 00:24:53.107 "params": { 00:24:53.107 "small_cache_size": 128, 00:24:53.107 "large_cache_size": 16, 00:24:53.107 "task_count": 2048, 00:24:53.107 "sequence_count": 2048, 00:24:53.107 "buf_count": 2048 00:24:53.107 } 00:24:53.107 } 00:24:53.107 ] 00:24:53.107 }, 00:24:53.107 { 00:24:53.107 "subsystem": "bdev", 00:24:53.107 "config": [ 00:24:53.107 { 00:24:53.107 "method": "bdev_set_options", 00:24:53.107 "params": { 00:24:53.107 "bdev_io_pool_size": 65535, 00:24:53.107 "bdev_io_cache_size": 256, 00:24:53.107 "bdev_auto_examine": true, 00:24:53.107 "iobuf_small_cache_size": 128, 00:24:53.107 "iobuf_large_cache_size": 16 00:24:53.107 } 00:24:53.107 }, 00:24:53.107 { 00:24:53.107 "method": "bdev_raid_set_options", 00:24:53.107 "params": { 00:24:53.107 "process_window_size_kb": 1024 00:24:53.107 } 00:24:53.107 }, 00:24:53.107 { 00:24:53.107 "method": "bdev_iscsi_set_options", 00:24:53.107 "params": { 00:24:53.107 "timeout_sec": 30 00:24:53.107 } 00:24:53.107 }, 00:24:53.107 { 00:24:53.107 "method": 
"bdev_nvme_set_options", 00:24:53.107 "params": { 00:24:53.107 "action_on_timeout": "none", 00:24:53.107 "timeout_us": 0, 00:24:53.107 "timeout_admin_us": 0, 00:24:53.107 "keep_alive_timeout_ms": 10000, 00:24:53.107 "arbitration_burst": 0, 00:24:53.107 "low_priority_weight": 0, 00:24:53.107 "medium_priority_weight": 0, 00:24:53.107 "high_priority_weight": 0, 00:24:53.107 "nvme_adminq_poll_period_us": 10000, 00:24:53.107 "nvme_ioq_poll_period_us": 0, 00:24:53.107 "io_queue_requests": 512, 00:24:53.107 "delay_cmd_submit": true, 00:24:53.107 "transport_retry_count": 4, 00:24:53.107 "bdev_retry_count": 3, 00:24:53.107 "transport_ack_timeout": 0, 00:24:53.107 "ctrlr_loss_timeout_sec": 0, 00:24:53.107 "reconnect_delay_sec": 0, 00:24:53.107 "fast_io_fail_timeout_sec": 0, 00:24:53.108 "disable_auto_failback": false, 00:24:53.108 "generate_uuids": false, 00:24:53.108 "transport_tos": 0, 00:24:53.108 "nvme_error_stat": false, 00:24:53.108 "rdma_srq_size": 0, 00:24:53.108 "io_path_stat": false, 00:24:53.108 "allow_accel_sequence": false, 00:24:53.108 "rdma_max_cq_size": 0, 00:24:53.108 "rdma_cm_event_timeout_ms": 0, 00:24:53.108 "dhchap_digests": [ 00:24:53.108 "sha256", 00:24:53.108 "sha384", 00:24:53.108 "sha512" 00:24:53.108 ], 00:24:53.108 "dhchap_dhgroups": [ 00:24:53.108 "null", 00:24:53.108 "ffdhe2048", 00:24:53.108 "ffdhe3072", 00:24:53.108 "ffdhe4096", 00:24:53.108 "ffdhe6144", 00:24:53.108 "ffdhe8192" 00:24:53.108 ] 00:24:53.108 } 00:24:53.108 }, 00:24:53.108 { 00:24:53.108 "method": "bdev_nvme_attach_controller", 00:24:53.108 "params": { 00:24:53.108 "name": "TLSTEST", 00:24:53.108 "trtype": "TCP", 00:24:53.108 "adrfam": "IPv4", 00:24:53.108 "traddr": "10.0.0.2", 00:24:53.108 "trsvcid": "4420", 00:24:53.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.108 "prchk_reftag": false, 00:24:53.108 "prchk_guard": false, 00:24:53.108 "ctrlr_loss_timeout_sec": 0, 00:24:53.108 "reconnect_delay_sec": 0, 00:24:53.108 "fast_io_fail_timeout_sec": 0, 00:24:53.108 "psk": "/tmp/tmp.5huBr61b5e", 00:24:53.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:53.108 "hdgst": false, 00:24:53.108 "ddgst": false 00:24:53.108 } 00:24:53.108 }, 00:24:53.108 { 00:24:53.108 "method": "bdev_nvme_set_hotplug", 00:24:53.108 "params": { 00:24:53.108 "period_us": 100000, 00:24:53.108 "enable": false 00:24:53.108 } 00:24:53.108 }, 00:24:53.108 { 00:24:53.108 "method": "bdev_wait_for_examine" 00:24:53.108 } 00:24:53.108 ] 00:24:53.108 }, 00:24:53.108 { 00:24:53.108 "subsystem": "nbd", 00:24:53.108 "config": [] 00:24:53.108 } 00:24:53.108 ] 00:24:53.108 }' 00:24:53.108 [2024-06-07 14:27:16.583320] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:24:53.108 [2024-06-07 14:27:16.583373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601522 ] 00:24:53.108 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.108 [2024-06-07 14:27:16.637945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.108 [2024-06-07 14:27:16.666049] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.368 [2024-06-07 14:27:16.784824] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:53.368 [2024-06-07 14:27:16.784892] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:53.939 14:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:53.939 14:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:24:53.939 14:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:53.939 Running I/O for 10 seconds... 00:25:03.931 00:25:03.931 Latency(us) 00:25:03.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.931 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:03.931 Verification LBA range: start 0x0 length 0x2000 00:25:03.931 TLSTESTn1 : 10.05 6168.06 24.09 0.00 0.00 20693.27 4505.60 45875.20 00:25:03.931 =================================================================================================================== 00:25:03.931 Total : 6168.06 24.09 0.00 0.00 20693.27 4505.60 45875.20 00:25:03.931 0 00:25:03.931 14:27:27 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:03.931 14:27:27 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 601522 00:25:03.932 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 601522 ']' 00:25:03.932 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 601522 00:25:03.932 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:25:03.932 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:03.932 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 601522 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 601522' 00:25:04.192 killing process with pid 601522 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 601522 00:25:04.192 Received shutdown signal, test time was about 10.000000 seconds 00:25:04.192 00:25:04.192 Latency(us) 00:25:04.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.192 =================================================================================================================== 00:25:04.192 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.192 [2024-06-07 14:27:27.589955] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in 
v24.09 hit 1 times 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 601522 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 601265 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 601265 ']' 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 601265 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 601265 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 601265' 00:25:04.192 killing process with pid 601265 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 601265 00:25:04.192 [2024-06-07 14:27:27.750290] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:04.192 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 601265 00:25:04.453 14:27:27 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:25:04.453 14:27:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:04.453 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:04.453 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:04.453 14:27:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=603626 00:25:04.453 14:27:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 603626 00:25:04.453 14:27:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:04.453 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 603626 ']' 00:25:04.453 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.453 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:04.453 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.453 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:04.453 14:27:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:04.453 [2024-06-07 14:27:27.924226] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:25:04.453 [2024-06-07 14:27:27.924284] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.453 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.453 [2024-06-07 14:27:27.995898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.453 [2024-06-07 14:27:28.028593] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:04.453 [2024-06-07 14:27:28.028633] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.453 [2024-06-07 14:27:28.028641] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.453 [2024-06-07 14:27:28.028648] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:04.453 [2024-06-07 14:27:28.028654] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:04.453 [2024-06-07 14:27:28.028681] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.123 14:27:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:05.123 14:27:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:25:05.123 14:27:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:05.124 14:27:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:05.124 14:27:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.124 14:27:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.124 14:27:28 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.5huBr61b5e 00:25:05.124 14:27:28 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.5huBr61b5e 00:25:05.124 14:27:28 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:05.384 [2024-06-07 14:27:28.861300] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.384 14:27:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:05.645 14:27:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:05.645 [2024-06-07 14:27:29.190115] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:05.645 [2024-06-07 14:27:29.190308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.645 14:27:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:05.906 malloc0 00:25:05.906 14:27:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:05.906 14:27:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5huBr61b5e 00:25:06.166 [2024-06-07 14:27:29.689902] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:06.166 14:27:29 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=604047 00:25:06.166 14:27:29 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:06.166 14:27:29 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 
-o 4k -w verify -t 1 00:25:06.166 14:27:29 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 604047 /var/tmp/bdevperf.sock 00:25:06.166 14:27:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 604047 ']' 00:25:06.166 14:27:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:06.166 14:27:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:06.166 14:27:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:06.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:06.166 14:27:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:06.166 14:27:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:06.166 [2024-06-07 14:27:29.751812] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:25:06.166 [2024-06-07 14:27:29.751866] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid604047 ] 00:25:06.166 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.427 [2024-06-07 14:27:29.830810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.427 [2024-06-07 14:27:29.859457] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.997 14:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:06.997 14:27:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:25:06.997 14:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5huBr61b5e 00:25:07.257 14:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:07.257 [2024-06-07 14:27:30.796242] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:07.257 nvme0n1 00:25:07.257 14:27:30 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:07.516 Running I/O for 1 seconds... 
00:25:08.458 00:25:08.459 Latency(us) 00:25:08.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.459 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:08.459 Verification LBA range: start 0x0 length 0x2000 00:25:08.459 nvme0n1 : 1.02 5952.06 23.25 0.00 0.00 21326.48 5761.71 40413.87 00:25:08.459 =================================================================================================================== 00:25:08.459 Total : 5952.06 23.25 0.00 0.00 21326.48 5761.71 40413.87 00:25:08.459 0 00:25:08.459 14:27:31 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 604047 00:25:08.459 14:27:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 604047 ']' 00:25:08.459 14:27:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 604047 00:25:08.459 14:27:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:25:08.459 14:27:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:08.459 14:27:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 604047 00:25:08.459 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:08.459 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:08.459 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 604047' 00:25:08.459 killing process with pid 604047 00:25:08.459 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 604047 00:25:08.459 Received shutdown signal, test time was about 1.000000 seconds 00:25:08.459 00:25:08.459 Latency(us) 00:25:08.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.459 =================================================================================================================== 00:25:08.459 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:08.459 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 604047 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 603626 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 603626 ']' 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 603626 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 603626 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 603626' 00:25:08.720 killing process with pid 603626 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 603626 00:25:08.720 [2024-06-07 14:27:32.202633] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 603626 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:08.720 14:27:32 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=604580 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 604580 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 604580 ']' 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:08.720 14:27:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.981 [2024-06-07 14:27:32.386596] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:25:08.981 [2024-06-07 14:27:32.386649] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.981 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.981 [2024-06-07 14:27:32.457864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.981 [2024-06-07 14:27:32.487602] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.981 [2024-06-07 14:27:32.487639] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.981 [2024-06-07 14:27:32.487646] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.981 [2024-06-07 14:27:32.487653] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.981 [2024-06-07 14:27:32.487658] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
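In the bdevperf run above (pid 604047), the initiator consumed the PSK through the keyring rather than a raw path: the key file is registered with the bdevperf application as key0 and the attach call references it by name. A condensed sketch of the commands traced above, using the bdevperf RPC socket:

  # register the PSK file under the name key0 with the running bdevperf app
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5huBr61b5e
  # attach to the TLS listener, referencing the key by name instead of a path
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # drive the verify workload through the attached namespace
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests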
00:25:08.981 [2024-06-07 14:27:32.487678] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.552 14:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:09.552 14:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:25:09.552 14:27:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:09.552 14:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:09.552 14:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.811 14:27:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.811 14:27:33 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:25:09.811 14:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:09.811 14:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.811 [2024-06-07 14:27:33.207398] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.811 malloc0 00:25:09.811 [2024-06-07 14:27:33.234138] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:09.811 [2024-06-07 14:27:33.234335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.811 14:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:09.811 14:27:33 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=604778 00:25:09.811 14:27:33 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 604778 /var/tmp/bdevperf.sock 00:25:09.811 14:27:33 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:09.811 14:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 604778 ']' 00:25:09.811 14:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:09.811 14:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:09.811 14:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:09.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:09.811 14:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:09.811 14:27:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.811 [2024-06-07 14:27:33.310013] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:25:09.811 [2024-06-07 14:27:33.310060] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid604778 ] 00:25:09.812 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.812 [2024-06-07 14:27:33.388021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.812 [2024-06-07 14:27:33.416609] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.751 14:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:10.751 14:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:25:10.751 14:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5huBr61b5e 00:25:10.751 14:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:10.751 [2024-06-07 14:27:34.393313] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:11.011 nvme0n1 00:25:11.011 14:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:11.011 Running I/O for 1 seconds... 00:25:12.395 00:25:12.395 Latency(us) 00:25:12.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.395 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:12.395 Verification LBA range: start 0x0 length 0x2000 00:25:12.395 nvme0n1 : 1.04 5383.23 21.03 0.00 0.00 23349.34 4669.44 33423.36 00:25:12.395 =================================================================================================================== 00:25:12.395 Total : 5383.23 21.03 0.00 0.00 23349.34 4669.44 33423.36 00:25:12.395 0 00:25:12.395 14:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:25:12.395 14:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:12.395 14:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.395 14:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:12.395 14:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:25:12.395 "subsystems": [ 00:25:12.395 { 00:25:12.395 "subsystem": "keyring", 00:25:12.395 "config": [ 00:25:12.395 { 00:25:12.395 "method": "keyring_file_add_key", 00:25:12.395 "params": { 00:25:12.395 "name": "key0", 00:25:12.395 "path": "/tmp/tmp.5huBr61b5e" 00:25:12.395 } 00:25:12.395 } 00:25:12.395 ] 00:25:12.395 }, 00:25:12.395 { 00:25:12.395 "subsystem": "iobuf", 00:25:12.395 "config": [ 00:25:12.395 { 00:25:12.395 "method": "iobuf_set_options", 00:25:12.395 "params": { 00:25:12.395 "small_pool_count": 8192, 00:25:12.395 "large_pool_count": 1024, 00:25:12.395 "small_bufsize": 8192, 00:25:12.395 "large_bufsize": 135168 00:25:12.395 } 00:25:12.395 } 00:25:12.395 ] 00:25:12.395 }, 00:25:12.395 { 00:25:12.395 "subsystem": "sock", 00:25:12.395 "config": [ 00:25:12.395 { 00:25:12.395 "method": "sock_set_default_impl", 00:25:12.395 "params": { 00:25:12.395 "impl_name": "posix" 00:25:12.395 } 00:25:12.395 }, 00:25:12.395 
{ 00:25:12.395 "method": "sock_impl_set_options", 00:25:12.395 "params": { 00:25:12.395 "impl_name": "ssl", 00:25:12.395 "recv_buf_size": 4096, 00:25:12.395 "send_buf_size": 4096, 00:25:12.395 "enable_recv_pipe": true, 00:25:12.395 "enable_quickack": false, 00:25:12.395 "enable_placement_id": 0, 00:25:12.395 "enable_zerocopy_send_server": true, 00:25:12.395 "enable_zerocopy_send_client": false, 00:25:12.395 "zerocopy_threshold": 0, 00:25:12.395 "tls_version": 0, 00:25:12.395 "enable_ktls": false 00:25:12.395 } 00:25:12.395 }, 00:25:12.395 { 00:25:12.395 "method": "sock_impl_set_options", 00:25:12.395 "params": { 00:25:12.395 "impl_name": "posix", 00:25:12.395 "recv_buf_size": 2097152, 00:25:12.395 "send_buf_size": 2097152, 00:25:12.395 "enable_recv_pipe": true, 00:25:12.395 "enable_quickack": false, 00:25:12.395 "enable_placement_id": 0, 00:25:12.395 "enable_zerocopy_send_server": true, 00:25:12.395 "enable_zerocopy_send_client": false, 00:25:12.395 "zerocopy_threshold": 0, 00:25:12.395 "tls_version": 0, 00:25:12.395 "enable_ktls": false 00:25:12.395 } 00:25:12.395 } 00:25:12.395 ] 00:25:12.395 }, 00:25:12.395 { 00:25:12.395 "subsystem": "vmd", 00:25:12.395 "config": [] 00:25:12.395 }, 00:25:12.395 { 00:25:12.395 "subsystem": "accel", 00:25:12.395 "config": [ 00:25:12.395 { 00:25:12.395 "method": "accel_set_options", 00:25:12.395 "params": { 00:25:12.395 "small_cache_size": 128, 00:25:12.395 "large_cache_size": 16, 00:25:12.395 "task_count": 2048, 00:25:12.395 "sequence_count": 2048, 00:25:12.395 "buf_count": 2048 00:25:12.395 } 00:25:12.395 } 00:25:12.395 ] 00:25:12.395 }, 00:25:12.395 { 00:25:12.395 "subsystem": "bdev", 00:25:12.395 "config": [ 00:25:12.395 { 00:25:12.395 "method": "bdev_set_options", 00:25:12.395 "params": { 00:25:12.395 "bdev_io_pool_size": 65535, 00:25:12.395 "bdev_io_cache_size": 256, 00:25:12.395 "bdev_auto_examine": true, 00:25:12.395 "iobuf_small_cache_size": 128, 00:25:12.395 "iobuf_large_cache_size": 16 00:25:12.395 } 00:25:12.395 }, 00:25:12.395 { 00:25:12.395 "method": "bdev_raid_set_options", 00:25:12.395 "params": { 00:25:12.395 "process_window_size_kb": 1024 00:25:12.395 } 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "method": "bdev_iscsi_set_options", 00:25:12.396 "params": { 00:25:12.396 "timeout_sec": 30 00:25:12.396 } 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "method": "bdev_nvme_set_options", 00:25:12.396 "params": { 00:25:12.396 "action_on_timeout": "none", 00:25:12.396 "timeout_us": 0, 00:25:12.396 "timeout_admin_us": 0, 00:25:12.396 "keep_alive_timeout_ms": 10000, 00:25:12.396 "arbitration_burst": 0, 00:25:12.396 "low_priority_weight": 0, 00:25:12.396 "medium_priority_weight": 0, 00:25:12.396 "high_priority_weight": 0, 00:25:12.396 "nvme_adminq_poll_period_us": 10000, 00:25:12.396 "nvme_ioq_poll_period_us": 0, 00:25:12.396 "io_queue_requests": 0, 00:25:12.396 "delay_cmd_submit": true, 00:25:12.396 "transport_retry_count": 4, 00:25:12.396 "bdev_retry_count": 3, 00:25:12.396 "transport_ack_timeout": 0, 00:25:12.396 "ctrlr_loss_timeout_sec": 0, 00:25:12.396 "reconnect_delay_sec": 0, 00:25:12.396 "fast_io_fail_timeout_sec": 0, 00:25:12.396 "disable_auto_failback": false, 00:25:12.396 "generate_uuids": false, 00:25:12.396 "transport_tos": 0, 00:25:12.396 "nvme_error_stat": false, 00:25:12.396 "rdma_srq_size": 0, 00:25:12.396 "io_path_stat": false, 00:25:12.396 "allow_accel_sequence": false, 00:25:12.396 "rdma_max_cq_size": 0, 00:25:12.396 "rdma_cm_event_timeout_ms": 0, 00:25:12.396 "dhchap_digests": [ 00:25:12.396 "sha256", 00:25:12.396 "sha384", 
00:25:12.396 "sha512" 00:25:12.396 ], 00:25:12.396 "dhchap_dhgroups": [ 00:25:12.396 "null", 00:25:12.396 "ffdhe2048", 00:25:12.396 "ffdhe3072", 00:25:12.396 "ffdhe4096", 00:25:12.396 "ffdhe6144", 00:25:12.396 "ffdhe8192" 00:25:12.396 ] 00:25:12.396 } 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "method": "bdev_nvme_set_hotplug", 00:25:12.396 "params": { 00:25:12.396 "period_us": 100000, 00:25:12.396 "enable": false 00:25:12.396 } 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "method": "bdev_malloc_create", 00:25:12.396 "params": { 00:25:12.396 "name": "malloc0", 00:25:12.396 "num_blocks": 8192, 00:25:12.396 "block_size": 4096, 00:25:12.396 "physical_block_size": 4096, 00:25:12.396 "uuid": "78aa3565-6e0d-4f65-be23-bc361a244131", 00:25:12.396 "optimal_io_boundary": 0 00:25:12.396 } 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "method": "bdev_wait_for_examine" 00:25:12.396 } 00:25:12.396 ] 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "subsystem": "nbd", 00:25:12.396 "config": [] 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "subsystem": "scheduler", 00:25:12.396 "config": [ 00:25:12.396 { 00:25:12.396 "method": "framework_set_scheduler", 00:25:12.396 "params": { 00:25:12.396 "name": "static" 00:25:12.396 } 00:25:12.396 } 00:25:12.396 ] 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "subsystem": "nvmf", 00:25:12.396 "config": [ 00:25:12.396 { 00:25:12.396 "method": "nvmf_set_config", 00:25:12.396 "params": { 00:25:12.396 "discovery_filter": "match_any", 00:25:12.396 "admin_cmd_passthru": { 00:25:12.396 "identify_ctrlr": false 00:25:12.396 } 00:25:12.396 } 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "method": "nvmf_set_max_subsystems", 00:25:12.396 "params": { 00:25:12.396 "max_subsystems": 1024 00:25:12.396 } 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "method": "nvmf_set_crdt", 00:25:12.396 "params": { 00:25:12.396 "crdt1": 0, 00:25:12.396 "crdt2": 0, 00:25:12.396 "crdt3": 0 00:25:12.396 } 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "method": "nvmf_create_transport", 00:25:12.396 "params": { 00:25:12.396 "trtype": "TCP", 00:25:12.396 "max_queue_depth": 128, 00:25:12.396 "max_io_qpairs_per_ctrlr": 127, 00:25:12.396 "in_capsule_data_size": 4096, 00:25:12.396 "max_io_size": 131072, 00:25:12.396 "io_unit_size": 131072, 00:25:12.396 "max_aq_depth": 128, 00:25:12.396 "num_shared_buffers": 511, 00:25:12.396 "buf_cache_size": 4294967295, 00:25:12.396 "dif_insert_or_strip": false, 00:25:12.396 "zcopy": false, 00:25:12.396 "c2h_success": false, 00:25:12.396 "sock_priority": 0, 00:25:12.396 "abort_timeout_sec": 1, 00:25:12.396 "ack_timeout": 0, 00:25:12.396 "data_wr_pool_size": 0 00:25:12.396 } 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "method": "nvmf_create_subsystem", 00:25:12.396 "params": { 00:25:12.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.396 "allow_any_host": false, 00:25:12.396 "serial_number": "00000000000000000000", 00:25:12.396 "model_number": "SPDK bdev Controller", 00:25:12.396 "max_namespaces": 32, 00:25:12.396 "min_cntlid": 1, 00:25:12.396 "max_cntlid": 65519, 00:25:12.396 "ana_reporting": false 00:25:12.396 } 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "method": "nvmf_subsystem_add_host", 00:25:12.396 "params": { 00:25:12.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.396 "host": "nqn.2016-06.io.spdk:host1", 00:25:12.396 "psk": "key0" 00:25:12.396 } 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "method": "nvmf_subsystem_add_ns", 00:25:12.396 "params": { 00:25:12.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.396 "namespace": { 00:25:12.396 "nsid": 1, 00:25:12.396 "bdev_name": 
"malloc0", 00:25:12.396 "nguid": "78AA35656E0D4F65BE23BC361A244131", 00:25:12.396 "uuid": "78aa3565-6e0d-4f65-be23-bc361a244131", 00:25:12.396 "no_auto_visible": false 00:25:12.396 } 00:25:12.396 } 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "method": "nvmf_subsystem_add_listener", 00:25:12.396 "params": { 00:25:12.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.396 "listen_address": { 00:25:12.396 "trtype": "TCP", 00:25:12.396 "adrfam": "IPv4", 00:25:12.396 "traddr": "10.0.0.2", 00:25:12.396 "trsvcid": "4420" 00:25:12.396 }, 00:25:12.396 "secure_channel": true 00:25:12.396 } 00:25:12.396 } 00:25:12.396 ] 00:25:12.396 } 00:25:12.396 ] 00:25:12.396 }' 00:25:12.396 14:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:12.396 14:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:25:12.396 "subsystems": [ 00:25:12.396 { 00:25:12.396 "subsystem": "keyring", 00:25:12.396 "config": [ 00:25:12.396 { 00:25:12.396 "method": "keyring_file_add_key", 00:25:12.396 "params": { 00:25:12.396 "name": "key0", 00:25:12.396 "path": "/tmp/tmp.5huBr61b5e" 00:25:12.396 } 00:25:12.396 } 00:25:12.396 ] 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "subsystem": "iobuf", 00:25:12.396 "config": [ 00:25:12.396 { 00:25:12.396 "method": "iobuf_set_options", 00:25:12.396 "params": { 00:25:12.396 "small_pool_count": 8192, 00:25:12.396 "large_pool_count": 1024, 00:25:12.396 "small_bufsize": 8192, 00:25:12.396 "large_bufsize": 135168 00:25:12.396 } 00:25:12.396 } 00:25:12.396 ] 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "subsystem": "sock", 00:25:12.396 "config": [ 00:25:12.396 { 00:25:12.396 "method": "sock_set_default_impl", 00:25:12.396 "params": { 00:25:12.396 "impl_name": "posix" 00:25:12.396 } 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "method": "sock_impl_set_options", 00:25:12.396 "params": { 00:25:12.396 "impl_name": "ssl", 00:25:12.396 "recv_buf_size": 4096, 00:25:12.396 "send_buf_size": 4096, 00:25:12.396 "enable_recv_pipe": true, 00:25:12.396 "enable_quickack": false, 00:25:12.396 "enable_placement_id": 0, 00:25:12.396 "enable_zerocopy_send_server": true, 00:25:12.396 "enable_zerocopy_send_client": false, 00:25:12.396 "zerocopy_threshold": 0, 00:25:12.396 "tls_version": 0, 00:25:12.396 "enable_ktls": false 00:25:12.396 } 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "method": "sock_impl_set_options", 00:25:12.396 "params": { 00:25:12.396 "impl_name": "posix", 00:25:12.396 "recv_buf_size": 2097152, 00:25:12.396 "send_buf_size": 2097152, 00:25:12.396 "enable_recv_pipe": true, 00:25:12.396 "enable_quickack": false, 00:25:12.396 "enable_placement_id": 0, 00:25:12.396 "enable_zerocopy_send_server": true, 00:25:12.396 "enable_zerocopy_send_client": false, 00:25:12.396 "zerocopy_threshold": 0, 00:25:12.396 "tls_version": 0, 00:25:12.396 "enable_ktls": false 00:25:12.396 } 00:25:12.396 } 00:25:12.396 ] 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "subsystem": "vmd", 00:25:12.396 "config": [] 00:25:12.396 }, 00:25:12.396 { 00:25:12.396 "subsystem": "accel", 00:25:12.396 "config": [ 00:25:12.396 { 00:25:12.396 "method": "accel_set_options", 00:25:12.396 "params": { 00:25:12.396 "small_cache_size": 128, 00:25:12.396 "large_cache_size": 16, 00:25:12.396 "task_count": 2048, 00:25:12.396 "sequence_count": 2048, 00:25:12.396 "buf_count": 2048 00:25:12.397 } 00:25:12.397 } 00:25:12.397 ] 00:25:12.397 }, 00:25:12.397 { 00:25:12.397 "subsystem": "bdev", 00:25:12.397 "config": [ 00:25:12.397 { 00:25:12.397 
"method": "bdev_set_options", 00:25:12.397 "params": { 00:25:12.397 "bdev_io_pool_size": 65535, 00:25:12.397 "bdev_io_cache_size": 256, 00:25:12.397 "bdev_auto_examine": true, 00:25:12.397 "iobuf_small_cache_size": 128, 00:25:12.397 "iobuf_large_cache_size": 16 00:25:12.397 } 00:25:12.397 }, 00:25:12.397 { 00:25:12.397 "method": "bdev_raid_set_options", 00:25:12.397 "params": { 00:25:12.397 "process_window_size_kb": 1024 00:25:12.397 } 00:25:12.397 }, 00:25:12.397 { 00:25:12.397 "method": "bdev_iscsi_set_options", 00:25:12.397 "params": { 00:25:12.397 "timeout_sec": 30 00:25:12.397 } 00:25:12.397 }, 00:25:12.397 { 00:25:12.397 "method": "bdev_nvme_set_options", 00:25:12.397 "params": { 00:25:12.397 "action_on_timeout": "none", 00:25:12.397 "timeout_us": 0, 00:25:12.397 "timeout_admin_us": 0, 00:25:12.397 "keep_alive_timeout_ms": 10000, 00:25:12.397 "arbitration_burst": 0, 00:25:12.397 "low_priority_weight": 0, 00:25:12.397 "medium_priority_weight": 0, 00:25:12.397 "high_priority_weight": 0, 00:25:12.397 "nvme_adminq_poll_period_us": 10000, 00:25:12.397 "nvme_ioq_poll_period_us": 0, 00:25:12.397 "io_queue_requests": 512, 00:25:12.397 "delay_cmd_submit": true, 00:25:12.397 "transport_retry_count": 4, 00:25:12.397 "bdev_retry_count": 3, 00:25:12.397 "transport_ack_timeout": 0, 00:25:12.397 "ctrlr_loss_timeout_sec": 0, 00:25:12.397 "reconnect_delay_sec": 0, 00:25:12.397 "fast_io_fail_timeout_sec": 0, 00:25:12.397 "disable_auto_failback": false, 00:25:12.397 "generate_uuids": false, 00:25:12.397 "transport_tos": 0, 00:25:12.397 "nvme_error_stat": false, 00:25:12.397 "rdma_srq_size": 0, 00:25:12.397 "io_path_stat": false, 00:25:12.397 "allow_accel_sequence": false, 00:25:12.397 "rdma_max_cq_size": 0, 00:25:12.397 "rdma_cm_event_timeout_ms": 0, 00:25:12.397 "dhchap_digests": [ 00:25:12.397 "sha256", 00:25:12.397 "sha384", 00:25:12.397 "sha512" 00:25:12.397 ], 00:25:12.397 "dhchap_dhgroups": [ 00:25:12.397 "null", 00:25:12.397 "ffdhe2048", 00:25:12.397 "ffdhe3072", 00:25:12.397 "ffdhe4096", 00:25:12.397 "ffdhe6144", 00:25:12.397 "ffdhe8192" 00:25:12.397 ] 00:25:12.397 } 00:25:12.397 }, 00:25:12.397 { 00:25:12.397 "method": "bdev_nvme_attach_controller", 00:25:12.397 "params": { 00:25:12.397 "name": "nvme0", 00:25:12.397 "trtype": "TCP", 00:25:12.397 "adrfam": "IPv4", 00:25:12.397 "traddr": "10.0.0.2", 00:25:12.397 "trsvcid": "4420", 00:25:12.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.397 "prchk_reftag": false, 00:25:12.397 "prchk_guard": false, 00:25:12.397 "ctrlr_loss_timeout_sec": 0, 00:25:12.397 "reconnect_delay_sec": 0, 00:25:12.397 "fast_io_fail_timeout_sec": 0, 00:25:12.397 "psk": "key0", 00:25:12.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:12.397 "hdgst": false, 00:25:12.397 "ddgst": false 00:25:12.397 } 00:25:12.397 }, 00:25:12.397 { 00:25:12.397 "method": "bdev_nvme_set_hotplug", 00:25:12.397 "params": { 00:25:12.397 "period_us": 100000, 00:25:12.397 "enable": false 00:25:12.397 } 00:25:12.397 }, 00:25:12.397 { 00:25:12.397 "method": "bdev_enable_histogram", 00:25:12.397 "params": { 00:25:12.397 "name": "nvme0n1", 00:25:12.397 "enable": true 00:25:12.397 } 00:25:12.397 }, 00:25:12.397 { 00:25:12.397 "method": "bdev_wait_for_examine" 00:25:12.397 } 00:25:12.397 ] 00:25:12.397 }, 00:25:12.397 { 00:25:12.397 "subsystem": "nbd", 00:25:12.397 "config": [] 00:25:12.397 } 00:25:12.397 ] 00:25:12.397 }' 00:25:12.397 14:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 604778 00:25:12.397 14:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 604778 ']' 
00:25:12.397 14:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 604778 00:25:12.397 14:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:25:12.397 14:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:12.397 14:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 604778 00:25:12.397 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:12.397 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:12.397 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 604778' 00:25:12.397 killing process with pid 604778 00:25:12.397 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 604778 00:25:12.397 Received shutdown signal, test time was about 1.000000 seconds 00:25:12.397 00:25:12.397 Latency(us) 00:25:12.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.397 =================================================================================================================== 00:25:12.397 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:12.397 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 604778 00:25:12.657 14:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 604580 00:25:12.657 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 604580 ']' 00:25:12.657 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 604580 00:25:12.657 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:25:12.657 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:12.657 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 604580 00:25:12.657 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:12.657 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:12.657 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 604580' 00:25:12.657 killing process with pid 604580 00:25:12.657 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 604580 00:25:12.657 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 604580 00:25:12.918 14:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:25:12.918 14:27:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:12.918 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:12.918 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.918 14:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:25:12.918 "subsystems": [ 00:25:12.918 { 00:25:12.918 "subsystem": "keyring", 00:25:12.918 "config": [ 00:25:12.918 { 00:25:12.918 "method": "keyring_file_add_key", 00:25:12.918 "params": { 00:25:12.918 "name": "key0", 00:25:12.918 "path": "/tmp/tmp.5huBr61b5e" 00:25:12.918 } 00:25:12.918 } 00:25:12.918 ] 00:25:12.918 }, 00:25:12.918 { 00:25:12.918 "subsystem": "iobuf", 00:25:12.918 "config": [ 00:25:12.918 { 00:25:12.918 "method": "iobuf_set_options", 00:25:12.918 "params": { 00:25:12.918 "small_pool_count": 8192, 00:25:12.918 "large_pool_count": 1024, 00:25:12.918 "small_bufsize": 8192, 00:25:12.918 "large_bufsize": 135168 00:25:12.918 } 00:25:12.918 } 00:25:12.918 ] 
00:25:12.918 }, 00:25:12.918 { 00:25:12.918 "subsystem": "sock", 00:25:12.918 "config": [ 00:25:12.918 { 00:25:12.918 "method": "sock_set_default_impl", 00:25:12.918 "params": { 00:25:12.918 "impl_name": "posix" 00:25:12.918 } 00:25:12.918 }, 00:25:12.918 { 00:25:12.918 "method": "sock_impl_set_options", 00:25:12.918 "params": { 00:25:12.918 "impl_name": "ssl", 00:25:12.918 "recv_buf_size": 4096, 00:25:12.918 "send_buf_size": 4096, 00:25:12.918 "enable_recv_pipe": true, 00:25:12.918 "enable_quickack": false, 00:25:12.918 "enable_placement_id": 0, 00:25:12.918 "enable_zerocopy_send_server": true, 00:25:12.918 "enable_zerocopy_send_client": false, 00:25:12.918 "zerocopy_threshold": 0, 00:25:12.918 "tls_version": 0, 00:25:12.918 "enable_ktls": false 00:25:12.918 } 00:25:12.918 }, 00:25:12.918 { 00:25:12.918 "method": "sock_impl_set_options", 00:25:12.918 "params": { 00:25:12.918 "impl_name": "posix", 00:25:12.918 "recv_buf_size": 2097152, 00:25:12.918 "send_buf_size": 2097152, 00:25:12.918 "enable_recv_pipe": true, 00:25:12.918 "enable_quickack": false, 00:25:12.918 "enable_placement_id": 0, 00:25:12.918 "enable_zerocopy_send_server": true, 00:25:12.918 "enable_zerocopy_send_client": false, 00:25:12.918 "zerocopy_threshold": 0, 00:25:12.918 "tls_version": 0, 00:25:12.918 "enable_ktls": false 00:25:12.918 } 00:25:12.918 } 00:25:12.918 ] 00:25:12.918 }, 00:25:12.918 { 00:25:12.918 "subsystem": "vmd", 00:25:12.918 "config": [] 00:25:12.918 }, 00:25:12.918 { 00:25:12.918 "subsystem": "accel", 00:25:12.918 "config": [ 00:25:12.918 { 00:25:12.918 "method": "accel_set_options", 00:25:12.918 "params": { 00:25:12.918 "small_cache_size": 128, 00:25:12.918 "large_cache_size": 16, 00:25:12.918 "task_count": 2048, 00:25:12.918 "sequence_count": 2048, 00:25:12.918 "buf_count": 2048 00:25:12.918 } 00:25:12.918 } 00:25:12.918 ] 00:25:12.918 }, 00:25:12.918 { 00:25:12.918 "subsystem": "bdev", 00:25:12.918 "config": [ 00:25:12.918 { 00:25:12.918 "method": "bdev_set_options", 00:25:12.918 "params": { 00:25:12.918 "bdev_io_pool_size": 65535, 00:25:12.918 "bdev_io_cache_size": 256, 00:25:12.918 "bdev_auto_examine": true, 00:25:12.918 "iobuf_small_cache_size": 128, 00:25:12.918 "iobuf_large_cache_size": 16 00:25:12.918 } 00:25:12.918 }, 00:25:12.918 { 00:25:12.918 "method": "bdev_raid_set_options", 00:25:12.918 "params": { 00:25:12.918 "process_window_size_kb": 1024 00:25:12.918 } 00:25:12.918 }, 00:25:12.918 { 00:25:12.918 "method": "bdev_iscsi_set_options", 00:25:12.918 "params": { 00:25:12.918 "timeout_sec": 30 00:25:12.918 } 00:25:12.918 }, 00:25:12.918 { 00:25:12.918 "method": "bdev_nvme_set_options", 00:25:12.918 "params": { 00:25:12.918 "action_on_timeout": "none", 00:25:12.918 "timeout_us": 0, 00:25:12.918 "timeout_admin_us": 0, 00:25:12.918 "keep_alive_timeout_ms": 10000, 00:25:12.918 "arbitration_burst": 0, 00:25:12.918 "low_priority_weight": 0, 00:25:12.918 "medium_priority_weight": 0, 00:25:12.918 "high_priority_weight": 0, 00:25:12.918 "nvme_adminq_poll_period_us": 10000, 00:25:12.918 "nvme_ioq_poll_period_us": 0, 00:25:12.918 "io_queue_requests": 0, 00:25:12.919 "delay_cmd_submit": true, 00:25:12.919 "transport_retry_count": 4, 00:25:12.919 "bdev_retry_count": 3, 00:25:12.919 "transport_ack_timeout": 0, 00:25:12.919 "ctrlr_loss_timeout_sec": 0, 00:25:12.919 "reconnect_delay_sec": 0, 00:25:12.919 "fast_io_fail_timeout_sec": 0, 00:25:12.919 "disable_auto_failback": false, 00:25:12.919 "generate_uuids": false, 00:25:12.919 "transport_tos": 0, 00:25:12.919 "nvme_error_stat": false, 00:25:12.919 
"rdma_srq_size": 0, 00:25:12.919 "io_path_stat": false, 00:25:12.919 "allow_accel_sequence": false, 00:25:12.919 "rdma_max_cq_size": 0, 00:25:12.919 "rdma_cm_event_timeout_ms": 0, 00:25:12.919 "dhchap_digests": [ 00:25:12.919 "sha256", 00:25:12.919 "sha384", 00:25:12.919 "sha512" 00:25:12.919 ], 00:25:12.919 "dhchap_dhgroups": [ 00:25:12.919 "null", 00:25:12.919 "ffdhe2048", 00:25:12.919 "ffdhe3072", 00:25:12.919 "ffdhe4096", 00:25:12.919 "ffdhe6144", 00:25:12.919 "ffdhe8192" 00:25:12.919 ] 00:25:12.919 } 00:25:12.919 }, 00:25:12.919 { 00:25:12.919 "method": "bdev_nvme_set_hotplug", 00:25:12.919 "params": { 00:25:12.919 "period_us": 100000, 00:25:12.919 "enable": false 00:25:12.919 } 00:25:12.919 }, 00:25:12.919 { 00:25:12.919 "method": "bdev_malloc_create", 00:25:12.919 "params": { 00:25:12.919 "name": "malloc0", 00:25:12.919 "num_blocks": 8192, 00:25:12.919 "block_size": 4096, 00:25:12.919 "physical_block_size": 4096, 00:25:12.919 "uuid": "78aa3565-6e0d-4f65-be23-bc361a244131", 00:25:12.919 "optimal_io_boundary": 0 00:25:12.919 } 00:25:12.919 }, 00:25:12.919 { 00:25:12.919 "method": "bdev_wait_for_examine" 00:25:12.919 } 00:25:12.919 ] 00:25:12.919 }, 00:25:12.919 { 00:25:12.919 "subsystem": "nbd", 00:25:12.919 "config": [] 00:25:12.919 }, 00:25:12.919 { 00:25:12.919 "subsystem": "scheduler", 00:25:12.919 "config": [ 00:25:12.919 { 00:25:12.919 "method": "framework_set_scheduler", 00:25:12.919 "params": { 00:25:12.919 "name": "static" 00:25:12.919 } 00:25:12.919 } 00:25:12.919 ] 00:25:12.919 }, 00:25:12.919 { 00:25:12.919 "subsystem": "nvmf", 00:25:12.919 "config": [ 00:25:12.919 { 00:25:12.919 "method": "nvmf_set_config", 00:25:12.919 "params": { 00:25:12.919 "discovery_filter": "match_any", 00:25:12.919 "admin_cmd_passthru": { 00:25:12.919 "identify_ctrlr": false 00:25:12.919 } 00:25:12.919 } 00:25:12.919 }, 00:25:12.919 { 00:25:12.919 "method": "nvmf_set_max_subsystems", 00:25:12.919 "params": { 00:25:12.919 "max_subsystems": 1024 00:25:12.919 } 00:25:12.919 }, 00:25:12.919 { 00:25:12.919 "method": "nvmf_set_crdt", 00:25:12.919 "params": { 00:25:12.919 "crdt1": 0, 00:25:12.919 "crdt2": 0, 00:25:12.919 "crdt3": 0 00:25:12.919 } 00:25:12.919 }, 00:25:12.919 { 00:25:12.919 "method": "nvmf_create_transport", 00:25:12.919 "params": { 00:25:12.919 "trtype": "TCP", 00:25:12.919 "max_queue_depth": 128, 00:25:12.919 "max_io_qpairs_per_ctrlr": 127, 00:25:12.919 "in_capsule_data_size": 4096, 00:25:12.919 "max_io_size": 131072, 00:25:12.919 "io_unit_size": 131072, 00:25:12.919 "max_aq_depth": 128, 00:25:12.919 "num_shared_buffers": 511, 00:25:12.919 "buf_cache_size": 4294967295, 00:25:12.919 "dif_insert_or_strip": false, 00:25:12.919 "zcopy": false, 00:25:12.919 "c2h_success": false, 00:25:12.919 "sock_priority": 0, 00:25:12.919 "abort_timeout_sec": 1, 00:25:12.919 "ack_timeout": 0, 00:25:12.919 "data_wr_pool_size": 0 00:25:12.919 } 00:25:12.919 }, 00:25:12.919 { 00:25:12.919 "method": "nvmf_create_subsystem", 00:25:12.919 "params": { 00:25:12.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.919 "allow_any_host": false, 00:25:12.919 "serial_number": "00000000000000000000", 00:25:12.919 "model_number": "SPDK bdev Controller", 00:25:12.919 "max_namespaces": 32, 00:25:12.919 "min_cntlid": 1, 00:25:12.919 "max_cntlid": 65519, 00:25:12.919 "ana_reporting": false 00:25:12.919 } 00:25:12.919 }, 00:25:12.919 { 00:25:12.919 "method": "nvmf_subsystem_add_host", 00:25:12.919 "params": { 00:25:12.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.919 "host": "nqn.2016-06.io.spdk:host1", 00:25:12.919 "psk": 
"key0" 00:25:12.919 } 00:25:12.919 }, 00:25:12.919 { 00:25:12.919 "method": "nvmf_subsystem_add_ns", 00:25:12.919 "params": { 00:25:12.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.919 "namespace": { 00:25:12.919 "nsid": 1, 00:25:12.919 "bdev_name": "malloc0", 00:25:12.919 "nguid": "78AA35656E0D4F65BE23BC361A244131", 00:25:12.919 "uuid": "78aa3565-6e0d-4f65-be23-bc361a244131", 00:25:12.919 "no_auto_visible": false 00:25:12.919 } 00:25:12.919 } 00:25:12.919 }, 00:25:12.919 { 00:25:12.919 "method": "nvmf_subsystem_add_listener", 00:25:12.919 "params": { 00:25:12.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.919 "listen_address": { 00:25:12.919 "trtype": "TCP", 00:25:12.919 "adrfam": "IPv4", 00:25:12.919 "traddr": "10.0.0.2", 00:25:12.919 "trsvcid": "4420" 00:25:12.919 }, 00:25:12.920 "secure_channel": true 00:25:12.920 } 00:25:12.920 } 00:25:12.920 ] 00:25:12.920 } 00:25:12.920 ] 00:25:12.920 }' 00:25:12.920 14:27:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=605294 00:25:12.920 14:27:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 605294 00:25:12.920 14:27:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:12.920 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 605294 ']' 00:25:12.920 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.920 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:12.920 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.920 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:12.920 14:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.920 [2024-06-07 14:27:36.368654] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:25:12.920 [2024-06-07 14:27:36.368709] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.920 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.920 [2024-06-07 14:27:36.442399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.920 [2024-06-07 14:27:36.474391] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:12.920 [2024-06-07 14:27:36.474432] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.920 [2024-06-07 14:27:36.474440] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:12.920 [2024-06-07 14:27:36.474448] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:12.920 [2024-06-07 14:27:36.474454] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:12.920 [2024-06-07 14:27:36.474512] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.180 [2024-06-07 14:27:36.665122] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.180 [2024-06-07 14:27:36.697125] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:13.180 [2024-06-07 14:27:36.709501] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.751 14:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:13.752 14:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:25:13.752 14:27:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:13.752 14:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:13.752 14:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:13.752 14:27:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.752 14:27:37 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=605642 00:25:13.752 14:27:37 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 605642 /var/tmp/bdevperf.sock 00:25:13.752 14:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 605642 ']' 00:25:13.752 14:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:13.752 14:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:13.752 14:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:13.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:13.752 14:27:37 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:13.752 14:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:13.752 14:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:13.752 14:27:37 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:25:13.752 "subsystems": [ 00:25:13.752 { 00:25:13.752 "subsystem": "keyring", 00:25:13.752 "config": [ 00:25:13.752 { 00:25:13.752 "method": "keyring_file_add_key", 00:25:13.752 "params": { 00:25:13.752 "name": "key0", 00:25:13.752 "path": "/tmp/tmp.5huBr61b5e" 00:25:13.752 } 00:25:13.752 } 00:25:13.752 ] 00:25:13.752 }, 00:25:13.752 { 00:25:13.752 "subsystem": "iobuf", 00:25:13.752 "config": [ 00:25:13.752 { 00:25:13.752 "method": "iobuf_set_options", 00:25:13.752 "params": { 00:25:13.752 "small_pool_count": 8192, 00:25:13.752 "large_pool_count": 1024, 00:25:13.752 "small_bufsize": 8192, 00:25:13.752 "large_bufsize": 135168 00:25:13.752 } 00:25:13.752 } 00:25:13.752 ] 00:25:13.752 }, 00:25:13.752 { 00:25:13.752 "subsystem": "sock", 00:25:13.752 "config": [ 00:25:13.752 { 00:25:13.752 "method": "sock_set_default_impl", 00:25:13.752 "params": { 00:25:13.752 "impl_name": "posix" 00:25:13.752 } 00:25:13.752 }, 00:25:13.752 { 00:25:13.752 "method": "sock_impl_set_options", 00:25:13.752 "params": { 00:25:13.752 "impl_name": "ssl", 00:25:13.752 "recv_buf_size": 4096, 00:25:13.752 "send_buf_size": 4096, 00:25:13.752 "enable_recv_pipe": true, 00:25:13.752 "enable_quickack": false, 00:25:13.752 "enable_placement_id": 0, 00:25:13.752 "enable_zerocopy_send_server": true, 00:25:13.752 "enable_zerocopy_send_client": false, 00:25:13.752 "zerocopy_threshold": 0, 00:25:13.752 "tls_version": 0, 00:25:13.752 "enable_ktls": false 00:25:13.752 } 00:25:13.752 }, 00:25:13.752 { 00:25:13.752 "method": "sock_impl_set_options", 00:25:13.752 "params": { 00:25:13.752 "impl_name": "posix", 00:25:13.752 "recv_buf_size": 2097152, 00:25:13.752 "send_buf_size": 2097152, 00:25:13.752 "enable_recv_pipe": true, 00:25:13.752 "enable_quickack": false, 00:25:13.752 "enable_placement_id": 0, 00:25:13.752 "enable_zerocopy_send_server": true, 00:25:13.752 "enable_zerocopy_send_client": false, 00:25:13.752 "zerocopy_threshold": 0, 00:25:13.752 "tls_version": 0, 00:25:13.752 "enable_ktls": false 00:25:13.752 } 00:25:13.752 } 00:25:13.752 ] 00:25:13.752 }, 00:25:13.752 { 00:25:13.752 "subsystem": "vmd", 00:25:13.752 "config": [] 00:25:13.752 }, 00:25:13.752 { 00:25:13.752 "subsystem": "accel", 00:25:13.752 "config": [ 00:25:13.752 { 00:25:13.752 "method": "accel_set_options", 00:25:13.752 "params": { 00:25:13.752 "small_cache_size": 128, 00:25:13.752 "large_cache_size": 16, 00:25:13.752 "task_count": 2048, 00:25:13.752 "sequence_count": 2048, 00:25:13.752 "buf_count": 2048 00:25:13.752 } 00:25:13.752 } 00:25:13.752 ] 00:25:13.752 }, 00:25:13.752 { 00:25:13.752 "subsystem": "bdev", 00:25:13.752 "config": [ 00:25:13.752 { 00:25:13.753 "method": "bdev_set_options", 00:25:13.753 "params": { 00:25:13.753 "bdev_io_pool_size": 65535, 00:25:13.753 "bdev_io_cache_size": 256, 00:25:13.753 "bdev_auto_examine": true, 00:25:13.753 "iobuf_small_cache_size": 128, 00:25:13.753 "iobuf_large_cache_size": 16 00:25:13.753 } 00:25:13.753 }, 00:25:13.753 { 00:25:13.753 "method": "bdev_raid_set_options", 00:25:13.753 "params": { 00:25:13.753 "process_window_size_kb": 1024 00:25:13.753 } 
00:25:13.753 }, 00:25:13.753 { 00:25:13.753 "method": "bdev_iscsi_set_options", 00:25:13.753 "params": { 00:25:13.753 "timeout_sec": 30 00:25:13.753 } 00:25:13.753 }, 00:25:13.753 { 00:25:13.753 "method": "bdev_nvme_set_options", 00:25:13.753 "params": { 00:25:13.753 "action_on_timeout": "none", 00:25:13.753 "timeout_us": 0, 00:25:13.753 "timeout_admin_us": 0, 00:25:13.753 "keep_alive_timeout_ms": 10000, 00:25:13.753 "arbitration_burst": 0, 00:25:13.753 "low_priority_weight": 0, 00:25:13.753 "medium_priority_weight": 0, 00:25:13.753 "high_priority_weight": 0, 00:25:13.753 "nvme_adminq_poll_period_us": 10000, 00:25:13.753 "nvme_ioq_poll_period_us": 0, 00:25:13.753 "io_queue_requests": 512, 00:25:13.753 "delay_cmd_submit": true, 00:25:13.753 "transport_retry_count": 4, 00:25:13.753 "bdev_retry_count": 3, 00:25:13.753 "transport_ack_timeout": 0, 00:25:13.753 "ctrlr_loss_timeout_sec": 0, 00:25:13.753 "reconnect_delay_sec": 0, 00:25:13.753 "fast_io_fail_timeout_sec": 0, 00:25:13.753 "disable_auto_failback": false, 00:25:13.753 "generate_uuids": false, 00:25:13.753 "transport_tos": 0, 00:25:13.753 "nvme_error_stat": false, 00:25:13.753 "rdma_srq_size": 0, 00:25:13.753 "io_path_stat": false, 00:25:13.753 "allow_accel_sequence": false, 00:25:13.753 "rdma_max_cq_size": 0, 00:25:13.753 "rdma_cm_event_timeout_ms": 0, 00:25:13.753 "dhchap_digests": [ 00:25:13.753 "sha256", 00:25:13.753 "sha384", 00:25:13.753 "sha512" 00:25:13.753 ], 00:25:13.753 "dhchap_dhgroups": [ 00:25:13.753 "null", 00:25:13.753 "ffdhe2048", 00:25:13.753 "ffdhe3072", 00:25:13.753 "ffdhe4096", 00:25:13.753 "ffdhe6144", 00:25:13.753 "ffdhe8192" 00:25:13.753 ] 00:25:13.753 } 00:25:13.753 }, 00:25:13.753 { 00:25:13.753 "method": "bdev_nvme_attach_controller", 00:25:13.753 "params": { 00:25:13.753 "name": "nvme0", 00:25:13.753 "trtype": "TCP", 00:25:13.753 "adrfam": "IPv4", 00:25:13.753 "traddr": "10.0.0.2", 00:25:13.753 "trsvcid": "4420", 00:25:13.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.753 "prchk_reftag": false, 00:25:13.753 "prchk_guard": false, 00:25:13.753 "ctrlr_loss_timeout_sec": 0, 00:25:13.753 "reconnect_delay_sec": 0, 00:25:13.753 "fast_io_fail_timeout_sec": 0, 00:25:13.753 "psk": "key0", 00:25:13.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:13.753 "hdgst": false, 00:25:13.753 "ddgst": false 00:25:13.753 } 00:25:13.753 }, 00:25:13.753 { 00:25:13.753 "method": "bdev_nvme_set_hotplug", 00:25:13.753 "params": { 00:25:13.753 "period_us": 100000, 00:25:13.753 "enable": false 00:25:13.753 } 00:25:13.753 }, 00:25:13.753 { 00:25:13.753 "method": "bdev_enable_histogram", 00:25:13.753 "params": { 00:25:13.753 "name": "nvme0n1", 00:25:13.753 "enable": true 00:25:13.753 } 00:25:13.753 }, 00:25:13.753 { 00:25:13.753 "method": "bdev_wait_for_examine" 00:25:13.753 } 00:25:13.753 ] 00:25:13.753 }, 00:25:13.753 { 00:25:13.753 "subsystem": "nbd", 00:25:13.753 "config": [] 00:25:13.753 } 00:25:13.753 ] 00:25:13.753 }' 00:25:13.753 [2024-06-07 14:27:37.204563] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:25:13.754 [2024-06-07 14:27:37.204619] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605642 ] 00:25:13.754 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.754 [2024-06-07 14:27:37.283076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.754 [2024-06-07 14:27:37.311454] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.017 [2024-06-07 14:27:37.438899] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:14.587 14:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:14.587 14:27:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:25:14.587 14:27:37 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:25:14.587 14:27:37 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:14.587 14:27:38 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.587 14:27:38 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:14.587 Running I/O for 1 seconds... 00:25:15.971 00:25:15.971 Latency(us) 00:25:15.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.971 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:15.971 Verification LBA range: start 0x0 length 0x2000 00:25:15.971 nvme0n1 : 1.05 5844.18 22.83 0.00 0.00 21381.20 7536.64 49370.45 00:25:15.971 =================================================================================================================== 00:25:15.971 Total : 5844.18 22.83 0.00 0.00 21381.20 7536.64 49370.45 00:25:15.971 0 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:15.971 nvmf_trace.0 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 605642 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 605642 ']' 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 605642 
00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 605642 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 605642' 00:25:15.971 killing process with pid 605642 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 605642 00:25:15.971 Received shutdown signal, test time was about 1.000000 seconds 00:25:15.971 00:25:15.971 Latency(us) 00:25:15.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.971 =================================================================================================================== 00:25:15.971 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 605642 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:15.971 rmmod nvme_tcp 00:25:15.971 rmmod nvme_fabrics 00:25:15.971 rmmod nvme_keyring 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 605294 ']' 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 605294 00:25:15.971 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 605294 ']' 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 605294 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 605294 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 605294' 00:25:16.233 killing process with pid 605294 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 605294 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 605294 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.233 14:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.778 14:27:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:18.778 14:27:41 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.LH1mVywmlF /tmp/tmp.r6A8ABPUOD /tmp/tmp.5huBr61b5e 00:25:18.778 00:25:18.778 real 1m19.704s 00:25:18.778 user 1m59.832s 00:25:18.778 sys 0m26.205s 00:25:18.778 14:27:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:18.778 14:27:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.778 ************************************ 00:25:18.778 END TEST nvmf_tls 00:25:18.778 ************************************ 00:25:18.778 14:27:41 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:18.778 14:27:41 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:18.778 14:27:41 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:18.778 14:27:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:18.778 ************************************ 00:25:18.778 START TEST nvmf_fips 00:25:18.778 ************************************ 00:25:18.778 14:27:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:18.778 * Looking for test storage... 
00:25:18.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.778 14:27:42 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:18.778 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:25:18.779 Error setting digest 00:25:18.779 00523507E27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:18.779 00523507E27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:25:18.779 14:27:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:26.919 
14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.919 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:26.920 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:26.920 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:26.920 Found net devices under 0000:31:00.0: cvl_0_0 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:26.920 Found net devices under 0000:31:00.1: cvl_0_1 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:26.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:25:26.920 00:25:26.920 --- 10.0.0.2 ping statistics --- 00:25:26.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.920 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:26.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:25:26.920 00:25:26.920 --- 10.0.0.1 ping statistics --- 00:25:26.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.920 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=610698 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 610698 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 610698 ']' 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:26.920 14:27:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:27.180 [2024-06-07 14:27:50.576421] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:25:27.180 [2024-06-07 14:27:50.576495] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.180 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.180 [2024-06-07 14:27:50.670183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.180 [2024-06-07 14:27:50.716374] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.180 [2024-06-07 14:27:50.716423] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:27.180 [2024-06-07 14:27:50.716431] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.180 [2024-06-07 14:27:50.716437] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.180 [2024-06-07 14:27:50.716443] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:27.180 [2024-06-07 14:27:50.716475] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.750 14:27:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:27.750 14:27:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:25:27.750 14:27:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:27.750 14:27:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:25:27.750 14:27:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:27.750 14:27:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.750 14:27:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:25:27.750 14:27:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:27.750 14:27:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:27.750 14:27:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:27.750 14:27:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:27.750 14:27:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:27.750 14:27:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:27.750 14:27:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:28.010 [2024-06-07 14:27:51.542917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.010 [2024-06-07 14:27:51.558906] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:28.010 [2024-06-07 14:27:51.559139] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.010 [2024-06-07 14:27:51.589024] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:28.010 malloc0 00:25:28.010 14:27:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:28.010 14:27:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=611045 00:25:28.010 14:27:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 611045 /var/tmp/bdevperf.sock 00:25:28.010 14:27:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:28.010 14:27:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 611045 ']' 00:25:28.010 14:27:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:28.010 14:27:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # 
local max_retries=100 00:25:28.010 14:27:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:28.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:28.010 14:27:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:28.010 14:27:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:28.270 [2024-06-07 14:27:51.680956] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:25:28.270 [2024-06-07 14:27:51.681031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611045 ] 00:25:28.270 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.270 [2024-06-07 14:27:51.743998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.270 [2024-06-07 14:27:51.780388] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.928 14:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:28.928 14:27:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:25:28.928 14:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:29.189 [2024-06-07 14:27:52.587872] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:29.189 [2024-06-07 14:27:52.587938] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:29.189 TLSTESTn1 00:25:29.189 14:27:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:29.189 Running I/O for 10 seconds... 
00:25:39.182 00:25:39.182 Latency(us) 00:25:39.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.182 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:39.182 Verification LBA range: start 0x0 length 0x2000 00:25:39.182 TLSTESTn1 : 10.03 5346.91 20.89 0.00 0.00 23895.50 5106.35 51991.89 00:25:39.182 =================================================================================================================== 00:25:39.182 Total : 5346.91 20.89 0.00 0.00 23895.50 5106.35 51991.89 00:25:39.182 0 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:39.443 nvmf_trace.0 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 611045 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 611045 ']' 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 611045 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 611045 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 611045' 00:25:39.443 killing process with pid 611045 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 611045 00:25:39.443 Received shutdown signal, test time was about 10.000000 seconds 00:25:39.443 00:25:39.443 Latency(us) 00:25:39.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.443 =================================================================================================================== 00:25:39.443 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.443 [2024-06-07 14:28:02.981870] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:39.443 14:28:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 611045 00:25:39.443 14:28:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:39.443 14:28:03 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:25:39.443 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:25:39.443 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:39.443 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:39.703 rmmod nvme_tcp 00:25:39.703 rmmod nvme_fabrics 00:25:39.703 rmmod nvme_keyring 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 610698 ']' 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 610698 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 610698 ']' 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 610698 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 610698 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 610698' 00:25:39.703 killing process with pid 610698 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 610698 00:25:39.703 [2024-06-07 14:28:03.220739] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 610698 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:39.703 14:28:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:42.246 00:25:42.246 real 0m23.460s 00:25:42.246 user 0m24.053s 00:25:42.246 sys 0m10.079s 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:42.246 ************************************ 00:25:42.246 END TEST nvmf_fips 00:25:42.246 
************************************ 00:25:42.246 14:28:05 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:25:42.246 14:28:05 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:42.246 14:28:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:25:42.246 14:28:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:25:42.246 14:28:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:42.246 ************************************ 00:25:42.246 START TEST nvmf_fuzz 00:25:42.246 ************************************ 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:42.246 * Looking for test storage... 00:25:42.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.246 14:28:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:42.247 14:28:05 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:25:42.247 14:28:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:50.387 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:50.387 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:50.387 Found net devices under 0000:31:00.0: cvl_0_0 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:50.387 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:50.388 Found net devices under 0000:31:00.1: cvl_0_1 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:50.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:25:50.388 00:25:50.388 --- 10.0.0.2 ping statistics --- 00:25:50.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.388 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:50.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:25:50.388 00:25:50.388 --- 10.0.0.1 ping statistics --- 00:25:50.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.388 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=617840 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 617840 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@830 -- # '[' -z 617840 ']' 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:25:50.388 14:28:13 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@863 -- # return 0 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:51.330 Malloc0 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:51.330 14:28:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:23.444 Fuzzing completed. 
Shutting down the fuzz application 00:26:23.444 00:26:23.444 Dumping successful admin opcodes: 00:26:23.444 8, 9, 10, 24, 00:26:23.444 Dumping successful io opcodes: 00:26:23.444 0, 9, 00:26:23.444 NS: 0x200003aeff00 I/O qp, Total commands completed: 924281, total successful commands: 5381, random_seed: 4174534272 00:26:23.444 NS: 0x200003aeff00 admin qp, Total commands completed: 116549, total successful commands: 951, random_seed: 1675881792 00:26:23.444 14:28:45 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:23.444 Fuzzing completed. Shutting down the fuzz application 00:26:23.444 00:26:23.444 Dumping successful admin opcodes: 00:26:23.444 24, 00:26:23.444 Dumping successful io opcodes: 00:26:23.444 00:26:23.444 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 801244333 00:26:23.444 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 801316017 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:23.444 rmmod nvme_tcp 00:26:23.444 rmmod nvme_fabrics 00:26:23.444 rmmod nvme_keyring 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 617840 ']' 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 617840 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@949 -- # '[' -z 617840 ']' 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # kill -0 617840 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # uname 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 617840 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:23.444 
14:28:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 617840' 00:26:23.444 killing process with pid 617840 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@968 -- # kill 617840 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@973 -- # wait 617840 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.444 14:28:46 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.360 14:28:48 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:25.360 14:28:48 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:25.360 00:26:25.360 real 0m43.472s 00:26:25.360 user 0m56.874s 00:26:25.360 sys 0m15.958s 00:26:25.360 14:28:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:25.360 14:28:48 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:25.360 ************************************ 00:26:25.360 END TEST nvmf_fuzz 00:26:25.360 ************************************ 00:26:25.622 14:28:49 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:25.622 14:28:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:26:25.622 14:28:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:25.622 14:28:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:25.622 ************************************ 00:26:25.622 START TEST nvmf_multiconnection 00:26:25.622 ************************************ 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:25.622 * Looking for test storage... 
00:26:25.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.622 14:28:49 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:26:25.623 14:28:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.805 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.806 14:28:57 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:33.806 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:33.806 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:33.806 Found net devices under 0000:31:00.0: cvl_0_0 00:26:33.806 14:28:57 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:33.806 Found net devices under 0000:31:00.1: cvl_0_1 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:33.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:26:33.806 00:26:33.806 --- 10.0.0.2 ping statistics --- 00:26:33.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.806 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:33.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:33.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:26:33.806 00:26:33.806 --- 10.0.0.1 ping statistics --- 00:26:33.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.806 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=628762 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 628762 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@830 -- # '[' -z 628762 ']' 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:33.806 14:28:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.806 [2024-06-07 14:28:57.441160] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:26:33.806 [2024-06-07 14:28:57.441231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.068 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.068 [2024-06-07 14:28:57.519355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:34.068 [2024-06-07 14:28:57.561044] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.068 [2024-06-07 14:28:57.561086] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.068 [2024-06-07 14:28:57.561094] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.068 [2024-06-07 14:28:57.561101] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.068 [2024-06-07 14:28:57.561107] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:34.068 [2024-06-07 14:28:57.561264] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.068 [2024-06-07 14:28:57.561453] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.068 [2024-06-07 14:28:57.561454] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:26:34.068 [2024-06-07 14:28:57.561308] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:34.639 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:34.639 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@863 -- # return 0 00:26:34.639 14:28:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:34.639 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:34.639 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.639 14:28:58 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.639 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:34.639 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.639 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.639 [2024-06-07 14:28:58.261835] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.640 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.640 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:34.640 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.640 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:34.640 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.640 14:28:58 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 Malloc1 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 [2024-06-07 14:28:58.329073] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 Malloc2 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.901 14:28:58 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 Malloc3 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 Malloc4 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:34.901 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.902 Malloc5 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.902 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.163 Malloc6 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 14:28:58 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 Malloc7 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 Malloc8 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 Malloc9 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 Malloc10 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.164 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.425 Malloc11 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.425 14:28:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:36.811 14:29:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:36.811 14:29:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:26:36.811 14:29:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:26:36.811 14:29:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:26:36.811 14:29:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:26:39.357 14:29:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:26:39.357 14:29:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:39.357 14:29:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK1 00:26:39.357 14:29:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:26:39.357 14:29:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:26:39.357 14:29:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:26:39.357 14:29:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:39.357 14:29:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:40.298 14:29:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:40.298 14:29:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:26:40.298 14:29:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:26:40.298 14:29:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:26:40.298 14:29:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:26:42.845 14:29:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:26:42.845 14:29:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:42.845 14:29:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK2 00:26:42.845 14:29:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:26:42.845 14:29:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:26:42.845 
14:29:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:26:42.845 14:29:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.845 14:29:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:43.786 14:29:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:43.786 14:29:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:26:43.786 14:29:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:26:43.786 14:29:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:26:43.786 14:29:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:26:46.332 14:29:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:26:46.332 14:29:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:46.332 14:29:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK3 00:26:46.332 14:29:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:26:46.332 14:29:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:26:46.332 14:29:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:26:46.332 14:29:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.332 14:29:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:47.718 14:29:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:47.718 14:29:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:26:47.718 14:29:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:26:47.718 14:29:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:26:47.718 14:29:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:26:49.643 14:29:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:26:49.643 14:29:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:49.643 14:29:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK4 00:26:49.643 14:29:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:26:49.643 14:29:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:26:49.643 14:29:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:26:49.643 14:29:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:49.643 14:29:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:51.564 14:29:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:51.564 14:29:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:26:51.564 14:29:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:26:51.564 14:29:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:26:51.564 14:29:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:26:53.478 14:29:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:26:53.478 14:29:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:53.478 14:29:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK5 00:26:53.478 14:29:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:26:53.478 14:29:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:26:53.478 14:29:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:26:53.478 14:29:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.478 14:29:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:55.419 14:29:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:55.419 14:29:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:26:55.419 14:29:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:26:55.419 14:29:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:26:55.419 14:29:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:26:57.330 14:29:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:26:57.330 14:29:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:26:57.330 14:29:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK6 00:26:57.330 14:29:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:26:57.330 14:29:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:26:57.330 14:29:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:26:57.330 14:29:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:57.330 14:29:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:58.714 14:29:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:58.714 14:29:22 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:26:58.714 14:29:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:26:58.714 14:29:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:26:58.714 14:29:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:27:01.257 14:29:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:27:01.257 14:29:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:01.257 14:29:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK7 00:27:01.257 14:29:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:27:01.257 14:29:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:27:01.257 14:29:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:27:01.257 14:29:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:01.257 14:29:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:02.644 14:29:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:02.644 14:29:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:27:02.644 14:29:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:27:02.644 14:29:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:27:02.644 14:29:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:27:04.557 14:29:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:27:04.557 14:29:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:04.557 14:29:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK8 00:27:04.557 14:29:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:27:04.557 14:29:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:27:04.557 14:29:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:27:04.557 14:29:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:04.557 14:29:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:06.469 14:29:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:06.469 14:29:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:27:06.469 14:29:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:27:06.469 14:29:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 
00:27:06.469 14:29:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:27:08.380 14:29:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:27:08.380 14:29:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:08.380 14:29:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK9 00:27:08.380 14:29:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:27:08.380 14:29:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:27:08.380 14:29:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:27:08.380 14:29:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:08.380 14:29:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:10.291 14:29:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:10.291 14:29:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:27:10.291 14:29:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:27:10.291 14:29:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:27:10.291 14:29:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:27:12.202 14:29:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:27:12.202 14:29:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:27:12.202 14:29:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK10 00:27:12.202 14:29:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:27:12.202 14:29:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:27:12.202 14:29:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:27:12.202 14:29:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:12.202 14:29:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:14.112 14:29:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:14.112 14:29:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:27:14.112 14:29:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:27:14.112 14:29:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:27:14.112 14:29:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:27:16.021 14:29:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:27:16.021 14:29:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o 
NAME,SERIAL 00:27:16.021 14:29:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK11 00:27:16.021 14:29:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:27:16.021 14:29:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:27:16.022 14:29:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:27:16.022 14:29:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:16.022 [global] 00:27:16.022 thread=1 00:27:16.022 invalidate=1 00:27:16.022 rw=read 00:27:16.022 time_based=1 00:27:16.022 runtime=10 00:27:16.022 ioengine=libaio 00:27:16.022 direct=1 00:27:16.022 bs=262144 00:27:16.022 iodepth=64 00:27:16.022 norandommap=1 00:27:16.022 numjobs=1 00:27:16.022 00:27:16.022 [job0] 00:27:16.022 filename=/dev/nvme0n1 00:27:16.022 [job1] 00:27:16.022 filename=/dev/nvme10n1 00:27:16.022 [job2] 00:27:16.022 filename=/dev/nvme1n1 00:27:16.022 [job3] 00:27:16.022 filename=/dev/nvme2n1 00:27:16.022 [job4] 00:27:16.022 filename=/dev/nvme3n1 00:27:16.022 [job5] 00:27:16.022 filename=/dev/nvme4n1 00:27:16.022 [job6] 00:27:16.022 filename=/dev/nvme5n1 00:27:16.022 [job7] 00:27:16.022 filename=/dev/nvme6n1 00:27:16.022 [job8] 00:27:16.022 filename=/dev/nvme7n1 00:27:16.022 [job9] 00:27:16.022 filename=/dev/nvme8n1 00:27:16.022 [job10] 00:27:16.022 filename=/dev/nvme9n1 00:27:16.282 Could not set queue depth (nvme0n1) 00:27:16.282 Could not set queue depth (nvme10n1) 00:27:16.282 Could not set queue depth (nvme1n1) 00:27:16.282 Could not set queue depth (nvme2n1) 00:27:16.282 Could not set queue depth (nvme3n1) 00:27:16.282 Could not set queue depth (nvme4n1) 00:27:16.282 Could not set queue depth (nvme5n1) 00:27:16.282 Could not set queue depth (nvme6n1) 00:27:16.282 Could not set queue depth (nvme7n1) 00:27:16.282 Could not set queue depth (nvme8n1) 00:27:16.282 Could not set queue depth (nvme9n1) 00:27:16.542 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.542 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.542 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.542 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.542 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.542 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.542 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.542 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.542 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.542 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.542 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:16.542 fio-3.35 00:27:16.542 Starting 11 threads 00:27:28.819 00:27:28.819 job0: 
(groupid=0, jobs=1): err= 0: pid=637297: Fri Jun 7 14:29:50 2024 00:27:28.819 read: IOPS=829, BW=207MiB/s (217MB/s)(2095MiB/10103msec) 00:27:28.819 slat (usec): min=5, max=51983, avg=1007.40, stdev=3044.32 00:27:28.819 clat (msec): min=2, max=185, avg=76.06, stdev=28.63 00:27:28.819 lat (msec): min=2, max=185, avg=77.06, stdev=29.10 00:27:28.819 clat percentiles (msec): 00:27:28.819 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 54], 00:27:28.819 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 77], 60.00th=[ 85], 00:27:28.819 | 70.00th=[ 93], 80.00th=[ 104], 90.00th=[ 112], 95.00th=[ 116], 00:27:28.819 | 99.00th=[ 134], 99.50th=[ 144], 99.90th=[ 178], 99.95th=[ 186], 00:27:28.819 | 99.99th=[ 186] 00:27:28.819 bw ( KiB/s): min=140288, max=311296, per=8.79%, avg=212889.60, stdev=57177.89, samples=20 00:27:28.819 iops : min= 548, max= 1216, avg=831.60, stdev=223.35, samples=20 00:27:28.819 lat (msec) : 4=0.12%, 10=1.46%, 20=2.35%, 50=13.67%, 100=59.53% 00:27:28.819 lat (msec) : 250=22.88% 00:27:28.819 cpu : usr=0.28%, sys=2.54%, ctx=1961, majf=0, minf=4097 00:27:28.819 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:28.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.819 issued rwts: total=8379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.819 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.819 job1: (groupid=0, jobs=1): err= 0: pid=637318: Fri Jun 7 14:29:50 2024 00:27:28.819 read: IOPS=1070, BW=268MiB/s (281MB/s)(2682MiB/10018msec) 00:27:28.819 slat (usec): min=5, max=38953, avg=821.24, stdev=2487.41 00:27:28.819 clat (usec): min=1597, max=166017, avg=58870.61, stdev=30625.43 00:27:28.819 lat (usec): min=1657, max=172129, avg=59691.84, stdev=31048.17 00:27:28.819 clat percentiles (msec): 00:27:28.819 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 24], 20.00th=[ 32], 00:27:28.819 | 30.00th=[ 42], 40.00th=[ 48], 50.00th=[ 54], 60.00th=[ 59], 00:27:28.819 | 70.00th=[ 73], 80.00th=[ 87], 90.00th=[ 104], 95.00th=[ 117], 00:27:28.819 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 150], 99.95th=[ 155], 00:27:28.819 | 99.99th=[ 157] 00:27:28.819 bw ( KiB/s): min=136704, max=478208, per=11.26%, avg=272972.80, stdev=113677.02, samples=20 00:27:28.819 iops : min= 534, max= 1868, avg=1066.30, stdev=444.05, samples=20 00:27:28.819 lat (msec) : 2=0.08%, 4=0.93%, 10=2.38%, 20=3.65%, 50=36.97% 00:27:28.819 lat (msec) : 100=43.95%, 250=12.04% 00:27:28.819 cpu : usr=0.33%, sys=3.03%, ctx=2275, majf=0, minf=4097 00:27:28.819 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:27:28.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.819 issued rwts: total=10726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.819 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.819 job2: (groupid=0, jobs=1): err= 0: pid=637338: Fri Jun 7 14:29:50 2024 00:27:28.819 read: IOPS=732, BW=183MiB/s (192MB/s)(1844MiB/10076msec) 00:27:28.819 slat (usec): min=8, max=37086, avg=1351.77, stdev=3475.73 00:27:28.819 clat (msec): min=16, max=155, avg=85.93, stdev=19.70 00:27:28.819 lat (msec): min=16, max=155, avg=87.28, stdev=20.00 00:27:28.819 clat percentiles (msec): 00:27:28.819 | 1.00th=[ 45], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 68], 00:27:28.819 | 30.00th=[ 74], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 92], 
00:27:28.819 | 70.00th=[ 99], 80.00th=[ 106], 90.00th=[ 112], 95.00th=[ 116], 00:27:28.819 | 99.00th=[ 126], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 153], 00:27:28.819 | 99.99th=[ 157] 00:27:28.819 bw ( KiB/s): min=144384, max=257536, per=7.73%, avg=187238.40, stdev=33959.84, samples=20 00:27:28.819 iops : min= 564, max= 1006, avg=731.40, stdev=132.66, samples=20 00:27:28.819 lat (msec) : 20=0.11%, 50=2.06%, 100=71.19%, 250=26.64% 00:27:28.819 cpu : usr=0.42%, sys=3.02%, ctx=1538, majf=0, minf=4097 00:27:28.819 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:27:28.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.819 issued rwts: total=7377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.819 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.819 job3: (groupid=0, jobs=1): err= 0: pid=637349: Fri Jun 7 14:29:50 2024 00:27:28.819 read: IOPS=796, BW=199MiB/s (209MB/s)(2007MiB/10072msec) 00:27:28.819 slat (usec): min=5, max=83813, avg=1165.49, stdev=3536.03 00:27:28.819 clat (usec): min=1365, max=204715, avg=79015.21, stdev=31450.85 00:27:28.819 lat (usec): min=1414, max=204764, avg=80180.70, stdev=31958.89 00:27:28.819 clat percentiles (msec): 00:27:28.819 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 32], 20.00th=[ 54], 00:27:28.819 | 30.00th=[ 67], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 91], 00:27:28.819 | 70.00th=[ 99], 80.00th=[ 107], 90.00th=[ 117], 95.00th=[ 125], 00:27:28.819 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 163], 99.95th=[ 163], 00:27:28.819 | 99.99th=[ 205] 00:27:28.819 bw ( KiB/s): min=137216, max=376320, per=8.41%, avg=203852.80, stdev=60381.80, samples=20 00:27:28.819 iops : min= 536, max= 1470, avg=796.30, stdev=235.87, samples=20 00:27:28.819 lat (msec) : 2=0.05%, 4=0.32%, 10=1.11%, 20=4.70%, 50=11.85% 00:27:28.819 lat (msec) : 100=54.91%, 250=27.06% 00:27:28.819 cpu : usr=0.35%, sys=2.58%, ctx=1769, majf=0, minf=4097 00:27:28.819 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:28.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.819 issued rwts: total=8026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.819 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.819 job4: (groupid=0, jobs=1): err= 0: pid=637356: Fri Jun 7 14:29:50 2024 00:27:28.819 read: IOPS=938, BW=235MiB/s (246MB/s)(2350MiB/10017msec) 00:27:28.819 slat (usec): min=5, max=89285, avg=878.27, stdev=3478.47 00:27:28.819 clat (msec): min=2, max=162, avg=67.27, stdev=34.71 00:27:28.819 lat (msec): min=2, max=190, avg=68.14, stdev=35.22 00:27:28.819 clat percentiles (msec): 00:27:28.819 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 23], 20.00th=[ 31], 00:27:28.819 | 30.00th=[ 42], 40.00th=[ 56], 50.00th=[ 68], 60.00th=[ 79], 00:27:28.819 | 70.00th=[ 91], 80.00th=[ 101], 90.00th=[ 115], 95.00th=[ 123], 00:27:28.819 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:27:28.819 | 99.99th=[ 163] 00:27:28.819 bw ( KiB/s): min=134656, max=394240, per=9.86%, avg=238976.00, stdev=73571.74, samples=20 00:27:28.819 iops : min= 526, max= 1540, avg=933.50, stdev=287.39, samples=20 00:27:28.819 lat (msec) : 4=0.19%, 10=2.64%, 20=6.13%, 50=26.03%, 100=45.40% 00:27:28.819 lat (msec) : 250=19.61% 00:27:28.819 cpu : usr=0.36%, sys=2.69%, ctx=2166, majf=0, minf=4097 00:27:28.819 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:27:28.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.819 issued rwts: total=9398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.820 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.820 job5: (groupid=0, jobs=1): err= 0: pid=637368: Fri Jun 7 14:29:50 2024 00:27:28.820 read: IOPS=749, BW=187MiB/s (196MB/s)(1883MiB/10054msec) 00:27:28.820 slat (usec): min=7, max=49934, avg=1322.48, stdev=3358.75 00:27:28.820 clat (msec): min=21, max=178, avg=84.04, stdev=21.82 00:27:28.820 lat (msec): min=22, max=178, avg=85.36, stdev=22.16 00:27:28.820 clat percentiles (msec): 00:27:28.820 | 1.00th=[ 50], 5.00th=[ 54], 10.00th=[ 58], 20.00th=[ 63], 00:27:28.820 | 30.00th=[ 69], 40.00th=[ 77], 50.00th=[ 83], 60.00th=[ 88], 00:27:28.820 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 114], 95.00th=[ 125], 00:27:28.820 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 163], 99.95th=[ 165], 00:27:28.820 | 99.99th=[ 180] 00:27:28.820 bw ( KiB/s): min=134656, max=269824, per=7.89%, avg=191180.80, stdev=40447.97, samples=20 00:27:28.820 iops : min= 526, max= 1054, avg=746.80, stdev=158.00, samples=20 00:27:28.820 lat (msec) : 50=1.70%, 100=74.98%, 250=23.32% 00:27:28.820 cpu : usr=0.53%, sys=3.23%, ctx=1616, majf=0, minf=4097 00:27:28.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:28.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.820 issued rwts: total=7531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.820 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.820 job6: (groupid=0, jobs=1): err= 0: pid=637380: Fri Jun 7 14:29:50 2024 00:27:28.820 read: IOPS=813, BW=203MiB/s (213MB/s)(2042MiB/10041msec) 00:27:28.820 slat (usec): min=5, max=88568, avg=1048.78, stdev=3544.78 00:27:28.820 clat (usec): min=1876, max=187922, avg=77556.21, stdev=33687.46 00:27:28.820 lat (usec): min=1925, max=206212, avg=78604.99, stdev=34250.06 00:27:28.820 clat percentiles (msec): 00:27:28.820 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 30], 20.00th=[ 45], 00:27:28.820 | 30.00th=[ 58], 40.00th=[ 71], 50.00th=[ 81], 60.00th=[ 93], 00:27:28.820 | 70.00th=[ 102], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 126], 00:27:28.820 | 99.00th=[ 138], 99.50th=[ 161], 99.90th=[ 165], 99.95th=[ 165], 00:27:28.820 | 99.99th=[ 188] 00:27:28.820 bw ( KiB/s): min=127488, max=428544, per=8.56%, avg=207533.00, stdev=73998.18, samples=20 00:27:28.820 iops : min= 498, max= 1674, avg=810.65, stdev=289.06, samples=20 00:27:28.820 lat (msec) : 2=0.01%, 4=0.31%, 10=2.24%, 20=2.49%, 50=18.95% 00:27:28.820 lat (msec) : 100=45.11%, 250=30.90% 00:27:28.820 cpu : usr=0.23%, sys=2.44%, ctx=1911, majf=0, minf=4097 00:27:28.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:28.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.820 issued rwts: total=8169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.820 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.820 job7: (groupid=0, jobs=1): err= 0: pid=637389: Fri Jun 7 14:29:50 2024 00:27:28.820 read: IOPS=1041, BW=260MiB/s (273MB/s)(2622MiB/10068msec) 00:27:28.820 slat (usec): min=5, max=98542, 
avg=809.39, stdev=3108.20 00:27:28.820 clat (usec): min=1714, max=226397, avg=60521.02, stdev=29815.20 00:27:28.820 lat (usec): min=1759, max=233393, avg=61330.40, stdev=30291.28 00:27:28.820 clat percentiles (msec): 00:27:28.820 | 1.00th=[ 9], 5.00th=[ 16], 10.00th=[ 22], 20.00th=[ 32], 00:27:28.820 | 30.00th=[ 44], 40.00th=[ 51], 50.00th=[ 60], 60.00th=[ 68], 00:27:28.820 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 101], 95.00th=[ 114], 00:27:28.820 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 159], 00:27:28.820 | 99.99th=[ 169] 00:27:28.820 bw ( KiB/s): min=145408, max=460800, per=11.01%, avg=266880.00, stdev=82111.92, samples=20 00:27:28.820 iops : min= 568, max= 1800, avg=1042.50, stdev=320.75, samples=20 00:27:28.820 lat (msec) : 2=0.03%, 4=0.28%, 10=0.93%, 20=7.85%, 50=30.31% 00:27:28.820 lat (msec) : 100=50.82%, 250=9.78% 00:27:28.820 cpu : usr=0.43%, sys=3.28%, ctx=2360, majf=0, minf=4097 00:27:28.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:28.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.820 issued rwts: total=10489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.820 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.820 job8: (groupid=0, jobs=1): err= 0: pid=637416: Fri Jun 7 14:29:50 2024 00:27:28.820 read: IOPS=756, BW=189MiB/s (198MB/s)(1904MiB/10070msec) 00:27:28.820 slat (usec): min=6, max=48865, avg=1155.54, stdev=3471.20 00:27:28.820 clat (msec): min=2, max=171, avg=83.37, stdev=27.93 00:27:28.820 lat (msec): min=2, max=171, avg=84.52, stdev=28.34 00:27:28.820 clat percentiles (msec): 00:27:28.820 | 1.00th=[ 10], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 57], 00:27:28.820 | 30.00th=[ 68], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 93], 00:27:28.820 | 70.00th=[ 101], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 128], 00:27:28.820 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 169], 99.95th=[ 169], 00:27:28.820 | 99.99th=[ 171] 00:27:28.820 bw ( KiB/s): min=134144, max=292352, per=7.98%, avg=193375.65, stdev=54453.83, samples=20 00:27:28.820 iops : min= 524, max= 1142, avg=755.35, stdev=212.71, samples=20 00:27:28.820 lat (msec) : 4=0.25%, 10=0.80%, 20=0.98%, 50=10.23%, 100=58.39% 00:27:28.820 lat (msec) : 250=29.35% 00:27:28.820 cpu : usr=0.29%, sys=2.23%, ctx=1716, majf=0, minf=4097 00:27:28.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:28.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.820 issued rwts: total=7616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.820 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.820 job9: (groupid=0, jobs=1): err= 0: pid=637428: Fri Jun 7 14:29:50 2024 00:27:28.820 read: IOPS=827, BW=207MiB/s (217MB/s)(2083MiB/10074msec) 00:27:28.820 slat (usec): min=5, max=97518, avg=1113.97, stdev=3774.77 00:27:28.820 clat (usec): min=1794, max=226304, avg=76145.71, stdev=33482.19 00:27:28.820 lat (usec): min=1845, max=226337, avg=77259.68, stdev=34015.69 00:27:28.820 clat percentiles (msec): 00:27:28.820 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 25], 20.00th=[ 46], 00:27:28.820 | 30.00th=[ 65], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 87], 00:27:28.820 | 70.00th=[ 94], 80.00th=[ 103], 90.00th=[ 117], 95.00th=[ 130], 00:27:28.820 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 182], 
00:27:28.820 | 99.99th=[ 226] 00:27:28.820 bw ( KiB/s): min=135168, max=424960, per=8.74%, avg=211686.40, stdev=78396.21, samples=20 00:27:28.820 iops : min= 528, max= 1660, avg=826.90, stdev=306.24, samples=20 00:27:28.820 lat (msec) : 2=0.01%, 4=0.29%, 10=2.50%, 20=5.20%, 50=14.20% 00:27:28.820 lat (msec) : 100=55.02%, 250=22.79% 00:27:28.820 cpu : usr=0.32%, sys=2.78%, ctx=1779, majf=0, minf=3535 00:27:28.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:28.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.820 issued rwts: total=8332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.820 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.820 job10: (groupid=0, jobs=1): err= 0: pid=637438: Fri Jun 7 14:29:50 2024 00:27:28.820 read: IOPS=951, BW=238MiB/s (250MB/s)(2397MiB/10073msec) 00:27:28.820 slat (usec): min=5, max=86326, avg=807.67, stdev=3003.26 00:27:28.820 clat (usec): min=1545, max=176154, avg=66321.46, stdev=39198.76 00:27:28.820 lat (usec): min=1593, max=184742, avg=67129.13, stdev=39764.84 00:27:28.820 clat percentiles (msec): 00:27:28.820 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 18], 20.00th=[ 27], 00:27:28.820 | 30.00th=[ 34], 40.00th=[ 48], 50.00th=[ 61], 60.00th=[ 84], 00:27:28.820 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 125], 00:27:28.820 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 161], 99.95th=[ 167], 00:27:28.820 | 99.99th=[ 176] 00:27:28.820 bw ( KiB/s): min=136704, max=494080, per=10.06%, avg=243814.40, stdev=111832.19, samples=20 00:27:28.820 iops : min= 534, max= 1930, avg=952.40, stdev=436.84, samples=20 00:27:28.820 lat (msec) : 2=0.01%, 4=0.90%, 10=3.66%, 20=7.14%, 50=30.28% 00:27:28.820 lat (msec) : 100=29.59%, 250=28.42% 00:27:28.820 cpu : usr=0.40%, sys=2.87%, ctx=2232, majf=0, minf=4097 00:27:28.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:27:28.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:28.820 issued rwts: total=9588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.820 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:28.820 00:27:28.820 Run status group 0 (all jobs): 00:27:28.820 READ: bw=2366MiB/s (2481MB/s), 183MiB/s-268MiB/s (192MB/s-281MB/s), io=23.3GiB (25.1GB), run=10017-10103msec 00:27:28.820 00:27:28.820 Disk stats (read/write): 00:27:28.820 nvme0n1: ios=16430/0, merge=0/0, ticks=1222524/0, in_queue=1222524, util=96.42% 00:27:28.820 nvme10n1: ios=20844/0, merge=0/0, ticks=1224866/0, in_queue=1224866, util=96.71% 00:27:28.820 nvme1n1: ios=14435/0, merge=0/0, ticks=1216955/0, in_queue=1216955, util=97.09% 00:27:28.820 nvme2n1: ios=15728/0, merge=0/0, ticks=1218298/0, in_queue=1218298, util=97.26% 00:27:28.820 nvme3n1: ios=18285/0, merge=0/0, ticks=1224496/0, in_queue=1224496, util=97.33% 00:27:28.820 nvme4n1: ios=14737/0, merge=0/0, ticks=1215564/0, in_queue=1215564, util=97.80% 00:27:28.820 nvme5n1: ios=15970/0, merge=0/0, ticks=1218295/0, in_queue=1218295, util=98.00% 00:27:28.820 nvme6n1: ios=20655/0, merge=0/0, ticks=1222675/0, in_queue=1222675, util=98.25% 00:27:28.820 nvme7n1: ios=14926/0, merge=0/0, ticks=1220088/0, in_queue=1220088, util=98.75% 00:27:28.820 nvme8n1: ios=16326/0, merge=0/0, ticks=1217947/0, in_queue=1217947, util=99.05% 00:27:28.820 nvme9n1: ios=18831/0, merge=0/0, 
ticks=1219067/0, in_queue=1219067, util=99.17% 00:27:28.820 14:29:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:28.820 [global] 00:27:28.820 thread=1 00:27:28.820 invalidate=1 00:27:28.820 rw=randwrite 00:27:28.820 time_based=1 00:27:28.820 runtime=10 00:27:28.820 ioengine=libaio 00:27:28.820 direct=1 00:27:28.820 bs=262144 00:27:28.820 iodepth=64 00:27:28.820 norandommap=1 00:27:28.820 numjobs=1 00:27:28.820 00:27:28.820 [job0] 00:27:28.820 filename=/dev/nvme0n1 00:27:28.820 [job1] 00:27:28.820 filename=/dev/nvme10n1 00:27:28.820 [job2] 00:27:28.821 filename=/dev/nvme1n1 00:27:28.821 [job3] 00:27:28.821 filename=/dev/nvme2n1 00:27:28.821 [job4] 00:27:28.821 filename=/dev/nvme3n1 00:27:28.821 [job5] 00:27:28.821 filename=/dev/nvme4n1 00:27:28.821 [job6] 00:27:28.821 filename=/dev/nvme5n1 00:27:28.821 [job7] 00:27:28.821 filename=/dev/nvme6n1 00:27:28.821 [job8] 00:27:28.821 filename=/dev/nvme7n1 00:27:28.821 [job9] 00:27:28.821 filename=/dev/nvme8n1 00:27:28.821 [job10] 00:27:28.821 filename=/dev/nvme9n1 00:27:28.821 Could not set queue depth (nvme0n1) 00:27:28.821 Could not set queue depth (nvme10n1) 00:27:28.821 Could not set queue depth (nvme1n1) 00:27:28.821 Could not set queue depth (nvme2n1) 00:27:28.821 Could not set queue depth (nvme3n1) 00:27:28.821 Could not set queue depth (nvme4n1) 00:27:28.821 Could not set queue depth (nvme5n1) 00:27:28.821 Could not set queue depth (nvme6n1) 00:27:28.821 Could not set queue depth (nvme7n1) 00:27:28.821 Could not set queue depth (nvme8n1) 00:27:28.821 Could not set queue depth (nvme9n1) 00:27:28.821 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.821 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.821 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.821 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.821 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.821 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.821 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.821 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.821 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.821 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.821 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:28.821 fio-3.35 00:27:28.821 Starting 11 threads 00:27:38.817 00:27:38.817 job0: (groupid=0, jobs=1): err= 0: pid=639608: Fri Jun 7 14:30:01 2024 00:27:38.817 write: IOPS=751, BW=188MiB/s (197MB/s)(1895MiB/10093msec); 0 zone resets 00:27:38.817 slat (usec): min=22, max=20548, avg=1218.17, stdev=2315.67 00:27:38.817 clat (msec): min=3, max=188, avg=83.95, stdev=21.98 00:27:38.817 lat (msec): min=3, max=188, avg=85.17, stdev=22.29 
00:27:38.817 clat percentiles (msec): 00:27:38.817 | 1.00th=[ 14], 5.00th=[ 42], 10.00th=[ 61], 20.00th=[ 74], 00:27:38.817 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 82], 60.00th=[ 85], 00:27:38.817 | 70.00th=[ 99], 80.00th=[ 103], 90.00th=[ 108], 95.00th=[ 114], 00:27:38.817 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 176], 99.95th=[ 184], 00:27:38.817 | 99.99th=[ 190] 00:27:38.817 bw ( KiB/s): min=139264, max=267776, per=9.20%, avg=192460.80, stdev=34801.29, samples=20 00:27:38.817 iops : min= 544, max= 1046, avg=751.80, stdev=135.94, samples=20 00:27:38.817 lat (msec) : 4=0.03%, 10=0.53%, 20=1.35%, 50=4.59%, 100=66.88% 00:27:38.817 lat (msec) : 250=26.63% 00:27:38.817 cpu : usr=1.67%, sys=2.20%, ctx=2566, majf=0, minf=1 00:27:38.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:27:38.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.817 issued rwts: total=0,7581,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.817 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.817 job1: (groupid=0, jobs=1): err= 0: pid=639626: Fri Jun 7 14:30:01 2024 00:27:38.817 write: IOPS=837, BW=209MiB/s (220MB/s)(2115MiB/10095msec); 0 zone resets 00:27:38.817 slat (usec): min=10, max=57080, avg=1103.15, stdev=2157.78 00:27:38.817 clat (msec): min=2, max=198, avg=75.23, stdev=21.87 00:27:38.817 lat (msec): min=2, max=198, avg=76.34, stdev=22.18 00:27:38.817 clat percentiles (msec): 00:27:38.817 | 1.00th=[ 12], 5.00th=[ 34], 10.00th=[ 53], 20.00th=[ 61], 00:27:38.817 | 30.00th=[ 66], 40.00th=[ 77], 50.00th=[ 79], 60.00th=[ 81], 00:27:38.817 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 102], 95.00th=[ 106], 00:27:38.817 | 99.00th=[ 146], 99.50th=[ 159], 99.90th=[ 186], 99.95th=[ 192], 00:27:38.817 | 99.99th=[ 199] 00:27:38.817 bw ( KiB/s): min=150016, max=297472, per=10.27%, avg=214937.60, stdev=40432.00, samples=20 00:27:38.817 iops : min= 586, max= 1162, avg=839.60, stdev=157.94, samples=20 00:27:38.817 lat (msec) : 4=0.04%, 10=0.60%, 20=1.82%, 50=5.54%, 100=80.62% 00:27:38.817 lat (msec) : 250=11.37% 00:27:38.817 cpu : usr=1.81%, sys=2.49%, ctx=2817, majf=0, minf=1 00:27:38.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:38.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.817 issued rwts: total=0,8459,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.817 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.817 job2: (groupid=0, jobs=1): err= 0: pid=639636: Fri Jun 7 14:30:01 2024 00:27:38.817 write: IOPS=685, BW=171MiB/s (180MB/s)(1730MiB/10095msec); 0 zone resets 00:27:38.817 slat (usec): min=22, max=36558, avg=1332.64, stdev=2595.59 00:27:38.817 clat (msec): min=5, max=199, avg=92.02, stdev=24.11 00:27:38.817 lat (msec): min=5, max=199, avg=93.36, stdev=24.45 00:27:38.817 clat percentiles (msec): 00:27:38.817 | 1.00th=[ 22], 5.00th=[ 52], 10.00th=[ 62], 20.00th=[ 67], 00:27:38.817 | 30.00th=[ 81], 40.00th=[ 95], 50.00th=[ 102], 60.00th=[ 104], 00:27:38.817 | 70.00th=[ 107], 80.00th=[ 109], 90.00th=[ 114], 95.00th=[ 121], 00:27:38.817 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 186], 99.95th=[ 192], 00:27:38.817 | 99.99th=[ 201] 00:27:38.817 bw ( KiB/s): min=135680, max=257024, per=8.39%, avg=175502.30, stdev=38115.43, samples=20 00:27:38.817 iops : min= 530, max= 1004, avg=685.55, 
stdev=148.89, samples=20 00:27:38.817 lat (msec) : 10=0.07%, 20=0.77%, 50=3.67%, 100=43.74%, 250=51.75% 00:27:38.817 cpu : usr=1.76%, sys=2.29%, ctx=2275, majf=0, minf=1 00:27:38.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:38.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.817 issued rwts: total=0,6918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.817 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.817 job3: (groupid=0, jobs=1): err= 0: pid=639643: Fri Jun 7 14:30:01 2024 00:27:38.817 write: IOPS=617, BW=154MiB/s (162MB/s)(1559MiB/10091msec); 0 zone resets 00:27:38.817 slat (usec): min=21, max=19894, avg=1561.57, stdev=2792.68 00:27:38.817 clat (msec): min=10, max=191, avg=102.00, stdev=17.01 00:27:38.817 lat (msec): min=10, max=191, avg=103.56, stdev=17.12 00:27:38.817 clat percentiles (msec): 00:27:38.817 | 1.00th=[ 29], 5.00th=[ 62], 10.00th=[ 94], 20.00th=[ 100], 00:27:38.817 | 30.00th=[ 102], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 106], 00:27:38.817 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 114], 95.00th=[ 122], 00:27:38.817 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 178], 99.95th=[ 184], 00:27:38.817 | 99.99th=[ 192] 00:27:38.817 bw ( KiB/s): min=137216, max=224768, per=7.55%, avg=157977.60, stdev=18412.88, samples=20 00:27:38.817 iops : min= 536, max= 878, avg=617.10, stdev=71.93, samples=20 00:27:38.817 lat (msec) : 20=0.56%, 50=1.81%, 100=23.03%, 250=74.59% 00:27:38.817 cpu : usr=1.27%, sys=1.84%, ctx=1785, majf=0, minf=1 00:27:38.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:38.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.817 issued rwts: total=0,6234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.817 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.817 job4: (groupid=0, jobs=1): err= 0: pid=639647: Fri Jun 7 14:30:01 2024 00:27:38.817 write: IOPS=953, BW=238MiB/s (250MB/s)(2394MiB/10037msec); 0 zone resets 00:27:38.817 slat (usec): min=22, max=21331, avg=996.76, stdev=2031.91 00:27:38.818 clat (msec): min=3, max=139, avg=66.07, stdev=31.94 00:27:38.818 lat (msec): min=3, max=139, avg=67.07, stdev=32.41 00:27:38.818 clat percentiles (msec): 00:27:38.818 | 1.00th=[ 17], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 40], 00:27:38.818 | 30.00th=[ 42], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 58], 00:27:38.818 | 70.00th=[ 102], 80.00th=[ 106], 90.00th=[ 110], 95.00th=[ 113], 00:27:38.818 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 138], 99.95th=[ 140], 00:27:38.818 | 99.99th=[ 140] 00:27:38.818 bw ( KiB/s): min=139776, max=410112, per=11.64%, avg=243534.55, stdev=106740.23, samples=20 00:27:38.818 iops : min= 546, max= 1602, avg=951.30, stdev=416.95, samples=20 00:27:38.818 lat (msec) : 4=0.01%, 10=0.23%, 20=1.27%, 50=52.52%, 100=14.45% 00:27:38.818 lat (msec) : 250=31.51% 00:27:38.818 cpu : usr=2.56%, sys=3.22%, ctx=2822, majf=0, minf=1 00:27:38.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:27:38.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.818 issued rwts: total=0,9575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.818 latency : target=0, window=0, percentile=100.00%, depth=64 
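The write-phase blocks for job5 through job10 continue below in the same fio output format. The wrapper invocation shown before this run (`fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10`) appears to map directly onto the job file it dumped: -i 262144 becomes bs=262144, -d 64 becomes iodepth=64, -t randwrite becomes rw=randwrite and -r 10 becomes runtime=10, with one [jobN] stanza per connected /dev/nvmeXn1. A hedged standalone equivalent for a single namespace is sketched here; the device path is an assumption, so substitute whichever /dev/nvmeXn1 lsblk lists with an SPDK* serial:

    # standalone fio run roughly equivalent to one [jobN] stanza from the dumped job file
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=randwrite --bs=262144 --iodepth=64 \
        --ioengine=libaio --direct=1 --thread --invalidate=1 \
        --time_based --runtime=10 --norandommap --numjobs=1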
00:27:38.818 job5: (groupid=0, jobs=1): err= 0: pid=639658: Fri Jun 7 14:30:01 2024 00:27:38.818 write: IOPS=578, BW=145MiB/s (152MB/s)(1459MiB/10095msec); 0 zone resets 00:27:38.818 slat (usec): min=27, max=63939, avg=1708.65, stdev=3098.37 00:27:38.818 clat (msec): min=42, max=197, avg=108.59, stdev=15.59 00:27:38.818 lat (msec): min=42, max=197, avg=110.30, stdev=15.52 00:27:38.818 clat percentiles (msec): 00:27:38.818 | 1.00th=[ 78], 5.00th=[ 95], 10.00th=[ 96], 20.00th=[ 100], 00:27:38.818 | 30.00th=[ 102], 40.00th=[ 103], 50.00th=[ 104], 60.00th=[ 105], 00:27:38.818 | 70.00th=[ 107], 80.00th=[ 123], 90.00th=[ 131], 95.00th=[ 134], 00:27:38.818 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 190], 99.95th=[ 190], 00:27:38.818 | 99.99th=[ 199] 00:27:38.818 bw ( KiB/s): min=91648, max=173568, per=7.06%, avg=147788.80, stdev=18704.16, samples=20 00:27:38.818 iops : min= 358, max= 678, avg=577.30, stdev=73.06, samples=20 00:27:38.818 lat (msec) : 50=0.19%, 100=20.66%, 250=79.15% 00:27:38.818 cpu : usr=1.21%, sys=2.04%, ctx=1509, majf=0, minf=1 00:27:38.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:27:38.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.818 issued rwts: total=0,5836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.818 job6: (groupid=0, jobs=1): err= 0: pid=639665: Fri Jun 7 14:30:01 2024 00:27:38.818 write: IOPS=1212, BW=303MiB/s (318MB/s)(3056MiB/10081msec); 0 zone resets 00:27:38.818 slat (usec): min=13, max=28021, avg=773.32, stdev=1532.82 00:27:38.818 clat (msec): min=3, max=161, avg=51.99, stdev=22.30 00:27:38.818 lat (msec): min=3, max=161, avg=52.77, stdev=22.60 00:27:38.818 clat percentiles (msec): 00:27:38.818 | 1.00th=[ 16], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 36], 00:27:38.818 | 30.00th=[ 38], 40.00th=[ 39], 50.00th=[ 42], 60.00th=[ 46], 00:27:38.818 | 70.00th=[ 55], 80.00th=[ 79], 90.00th=[ 83], 95.00th=[ 86], 00:27:38.818 | 99.00th=[ 127], 99.50th=[ 140], 99.90th=[ 148], 99.95th=[ 153], 00:27:38.818 | 99.99th=[ 157] 00:27:38.818 bw ( KiB/s): min=193024, max=429568, per=14.88%, avg=311296.00, stdev=101504.67, samples=20 00:27:38.818 iops : min= 754, max= 1678, avg=1216.00, stdev=396.50, samples=20 00:27:38.818 lat (msec) : 4=0.01%, 10=0.28%, 20=1.27%, 50=65.37%, 100=30.02% 00:27:38.818 lat (msec) : 250=3.06% 00:27:38.818 cpu : usr=2.42%, sys=3.65%, ctx=3564, majf=0, minf=1 00:27:38.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:27:38.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.818 issued rwts: total=0,12223,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.818 job7: (groupid=0, jobs=1): err= 0: pid=639671: Fri Jun 7 14:30:01 2024 00:27:38.818 write: IOPS=641, BW=160MiB/s (168MB/s)(1616MiB/10080msec); 0 zone resets 00:27:38.818 slat (usec): min=17, max=32122, avg=1419.96, stdev=2683.48 00:27:38.818 clat (msec): min=2, max=159, avg=98.36, stdev=22.00 00:27:38.818 lat (msec): min=2, max=159, avg=99.78, stdev=22.28 00:27:38.818 clat percentiles (msec): 00:27:38.818 | 1.00th=[ 23], 5.00th=[ 56], 10.00th=[ 78], 20.00th=[ 83], 00:27:38.818 | 30.00th=[ 95], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 104], 00:27:38.818 | 70.00th=[ 
105], 80.00th=[ 113], 90.00th=[ 129], 95.00th=[ 132], 00:27:38.818 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 150], 99.95th=[ 155], 00:27:38.818 | 99.99th=[ 159] 00:27:38.818 bw ( KiB/s): min=124928, max=214016, per=7.83%, avg=163840.00, stdev=26018.04, samples=20 00:27:38.818 iops : min= 488, max= 836, avg=640.00, stdev=101.63, samples=20 00:27:38.818 lat (msec) : 4=0.12%, 10=0.14%, 20=0.45%, 50=3.54%, 100=38.11% 00:27:38.818 lat (msec) : 250=57.64% 00:27:38.818 cpu : usr=1.64%, sys=1.85%, ctx=2069, majf=0, minf=1 00:27:38.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:38.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.818 issued rwts: total=0,6463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.818 job8: (groupid=0, jobs=1): err= 0: pid=639685: Fri Jun 7 14:30:01 2024 00:27:38.818 write: IOPS=701, BW=175MiB/s (184MB/s)(1769MiB/10094msec); 0 zone resets 00:27:38.818 slat (usec): min=25, max=47461, avg=1300.59, stdev=2486.97 00:27:38.818 clat (msec): min=7, max=190, avg=89.93, stdev=20.70 00:27:38.818 lat (msec): min=7, max=190, avg=91.23, stdev=20.93 00:27:38.818 clat percentiles (msec): 00:27:38.818 | 1.00th=[ 21], 5.00th=[ 51], 10.00th=[ 65], 20.00th=[ 79], 00:27:38.818 | 30.00th=[ 83], 40.00th=[ 84], 50.00th=[ 91], 60.00th=[ 101], 00:27:38.818 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 111], 95.00th=[ 114], 00:27:38.818 | 99.00th=[ 126], 99.50th=[ 134], 99.90th=[ 178], 99.95th=[ 184], 00:27:38.818 | 99.99th=[ 192] 00:27:38.818 bw ( KiB/s): min=143360, max=240640, per=8.58%, avg=179558.40, stdev=32413.28, samples=20 00:27:38.818 iops : min= 560, max= 940, avg=701.40, stdev=126.61, samples=20 00:27:38.818 lat (msec) : 10=0.06%, 20=0.89%, 50=4.00%, 100=54.46%, 250=40.60% 00:27:38.818 cpu : usr=1.46%, sys=2.31%, ctx=2295, majf=0, minf=1 00:27:38.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:38.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.818 issued rwts: total=0,7077,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.818 job9: (groupid=0, jobs=1): err= 0: pid=639698: Fri Jun 7 14:30:01 2024 00:27:38.818 write: IOPS=574, BW=144MiB/s (151MB/s)(1449MiB/10091msec); 0 zone resets 00:27:38.818 slat (usec): min=18, max=31341, avg=1594.02, stdev=3002.65 00:27:38.818 clat (msec): min=8, max=192, avg=109.78, stdev=16.38 00:27:38.818 lat (msec): min=8, max=192, avg=111.38, stdev=16.42 00:27:38.818 clat percentiles (msec): 00:27:38.818 | 1.00th=[ 68], 5.00th=[ 94], 10.00th=[ 96], 20.00th=[ 101], 00:27:38.818 | 30.00th=[ 103], 40.00th=[ 105], 50.00th=[ 108], 60.00th=[ 110], 00:27:38.818 | 70.00th=[ 113], 80.00th=[ 123], 90.00th=[ 131], 95.00th=[ 133], 00:27:38.818 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 184], 99.95th=[ 186], 00:27:38.818 | 99.99th=[ 192] 00:27:38.818 bw ( KiB/s): min=108544, max=161792, per=7.01%, avg=146764.80, stdev=14719.00, samples=20 00:27:38.818 iops : min= 424, max= 632, avg=573.30, stdev=57.50, samples=20 00:27:38.818 lat (msec) : 10=0.07%, 20=0.21%, 50=0.50%, 100=17.44%, 250=81.78% 00:27:38.818 cpu : usr=1.29%, sys=1.77%, ctx=1834, majf=0, minf=1 00:27:38.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, 
>=64=98.9% 00:27:38.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.818 issued rwts: total=0,5796,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.818 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.818 job10: (groupid=0, jobs=1): err= 0: pid=639708: Fri Jun 7 14:30:01 2024 00:27:38.818 write: IOPS=629, BW=157MiB/s (165MB/s)(1587MiB/10079msec); 0 zone resets 00:27:38.818 slat (usec): min=21, max=10485, avg=1497.95, stdev=2723.59 00:27:38.818 clat (msec): min=8, max=160, avg=100.07, stdev=21.46 00:27:38.818 lat (msec): min=9, max=160, avg=101.57, stdev=21.68 00:27:38.818 clat percentiles (msec): 00:27:38.818 | 1.00th=[ 23], 5.00th=[ 64], 10.00th=[ 79], 20.00th=[ 84], 00:27:38.818 | 30.00th=[ 96], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 104], 00:27:38.818 | 70.00th=[ 105], 80.00th=[ 118], 90.00th=[ 130], 95.00th=[ 132], 00:27:38.818 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 150], 99.95th=[ 155], 00:27:38.818 | 99.99th=[ 161] 00:27:38.818 bw ( KiB/s): min=124928, max=212480, per=7.69%, avg=160896.00, stdev=25388.33, samples=20 00:27:38.818 iops : min= 488, max= 830, avg=628.50, stdev=99.17, samples=20 00:27:38.818 lat (msec) : 10=0.02%, 20=0.66%, 50=2.69%, 100=36.26%, 250=60.37% 00:27:38.818 cpu : usr=1.27%, sys=1.97%, ctx=1958, majf=0, minf=1 00:27:38.818 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:38.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:38.819 issued rwts: total=0,6348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.819 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:38.819 00:27:38.819 Run status group 0 (all jobs): 00:27:38.819 WRITE: bw=2043MiB/s (2143MB/s), 144MiB/s-303MiB/s (151MB/s-318MB/s), io=20.1GiB (21.6GB), run=10037-10095msec 00:27:38.819 00:27:38.819 Disk stats (read/write): 00:27:38.819 nvme0n1: ios=51/15154, merge=0/0, ticks=1066/1231628, in_queue=1232694, util=100.00% 00:27:38.819 nvme10n1: ios=54/16910, merge=0/0, ticks=920/1230609, in_queue=1231529, util=100.00% 00:27:38.819 nvme1n1: ios=48/13829, merge=0/0, ticks=101/1231147, in_queue=1231248, util=97.35% 00:27:38.819 nvme2n1: ios=13/12465, merge=0/0, ticks=346/1229012, in_queue=1229358, util=97.45% 00:27:38.819 nvme3n1: ios=0/18612, merge=0/0, ticks=0/1201075, in_queue=1201075, util=97.26% 00:27:38.819 nvme4n1: ios=45/11663, merge=0/0, ticks=833/1222512, in_queue=1223345, util=100.00% 00:27:38.819 nvme5n1: ios=28/24072, merge=0/0, ticks=26/1199470, in_queue=1199496, util=98.03% 00:27:38.819 nvme6n1: ios=46/12552, merge=0/0, ticks=783/1201556, in_queue=1202339, util=99.90% 00:27:38.819 nvme7n1: ios=37/14150, merge=0/0, ticks=836/1229091, in_queue=1229927, util=99.92% 00:27:38.819 nvme8n1: ios=40/11588, merge=0/0, ticks=733/1230830, in_queue=1231563, util=99.94% 00:27:38.819 nvme9n1: ios=44/12325, merge=0/0, ticks=712/1200427, in_queue=1201139, util=99.93% 00:27:38.819 14:30:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:38.819 14:30:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:38.819 14:30:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:38.819 14:30:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:38.819 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK1 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK1 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:38.819 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK2 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK2 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:38.819 14:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:39.391 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:39.391 14:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:39.391 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:27:39.391 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:39.391 14:30:02 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1219 -- # grep -q -w SPDK3 00:27:39.391 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:39.391 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK3 00:27:39.391 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:27:39.391 14:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:39.391 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.391 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.391 14:30:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.391 14:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.391 14:30:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:39.660 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:39.660 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:39.660 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:27:39.660 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:39.660 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK4 00:27:39.660 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:39.660 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK4 00:27:39.660 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:27:39.660 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:39.660 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.660 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.660 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.660 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.660 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:39.660 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:39.660 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:39.661 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:27:39.661 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:39.661 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK5 00:27:39.661 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:39.661 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK5 00:27:39.661 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:27:39.661 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:39.661 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.661 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.661 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.661 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.661 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:39.920 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:39.920 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:39.920 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:27:39.920 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:39.920 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK6 00:27:39.920 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:39.920 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK6 00:27:39.920 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:27:39.920 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:39.920 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.920 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.920 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.920 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.920 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:40.180 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:40.180 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:40.180 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:27:40.180 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:40.180 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK7 00:27:40.180 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:40.180 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK7 00:27:40.180 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:27:40.180 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:40.180 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.180 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:40.181 14:30:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.181 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:27:40.181 14:30:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:40.440 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:40.440 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:40.440 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:27:40.440 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:40.440 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK8 00:27:40.440 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:40.440 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK8 00:27:40.440 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:27:40.440 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:40.440 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.440 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:40.441 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.441 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:40.441 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:40.700 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:40.700 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:40.700 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:27:40.700 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:40.700 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK9 00:27:40.700 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:40.700 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK9 00:27:40.700 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:27:40.700 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:40.700 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.700 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:40.700 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.700 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:40.700 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:40.960 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:27:40.960 14:30:04 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK10 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK10 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:40.960 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK11 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK11 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:40.960 rmmod nvme_tcp 00:27:40.960 rmmod nvme_fabrics 00:27:40.960 rmmod nvme_keyring 00:27:40.960 14:30:04 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 628762 ']' 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 628762 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@949 -- # '[' -z 628762 ']' 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # kill -0 628762 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # uname 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:40.960 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 628762 00:27:41.221 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:41.221 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:41.221 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # echo 'killing process with pid 628762' 00:27:41.221 killing process with pid 628762 00:27:41.221 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@968 -- # kill 628762 00:27:41.221 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@973 -- # wait 628762 00:27:41.481 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:41.481 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:41.481 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:41.481 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.481 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:41.481 14:30:04 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.481 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.481 14:30:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.391 14:30:06 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:43.391 00:27:43.391 real 1m17.937s 00:27:43.391 user 4m52.175s 00:27:43.391 sys 0m24.153s 00:27:43.391 14:30:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:43.391 14:30:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:43.391 ************************************ 00:27:43.391 END TEST nvmf_multiconnection 00:27:43.391 ************************************ 00:27:43.391 14:30:07 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:43.391 14:30:07 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:43.391 14:30:07 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:43.391 14:30:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:43.652 ************************************ 00:27:43.652 START TEST nvmf_initiator_timeout 00:27:43.652 
************************************ 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:43.652 * Looking for test storage... 00:27:43.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.652 14:30:07 
nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:27:43.652 14:30:07 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:51.789 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.789 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:51.790 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:51.790 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:31:00.0: cvl_0_0' 00:27:51.790 Found net devices under 0000:31:00.0: cvl_0_0 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:51.790 Found net devices under 0000:31:00.1: cvl_0_1 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:51.790 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.050 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:52.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:27:52.051 00:27:52.051 --- 10.0.0.2 ping statistics --- 00:27:52.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.051 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:52.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:27:52.051 00:27:52.051 --- 10.0.0.1 ping statistics --- 00:27:52.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.051 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=647474 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 647474 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@830 -- # '[' -z 647474 ']' 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:52.051 14:30:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.051 [2024-06-07 14:30:15.616765] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:27:52.051 [2024-06-07 14:30:15.616828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.051 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.051 [2024-06-07 14:30:15.694523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.311 [2024-06-07 14:30:15.735073] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.311 [2024-06-07 14:30:15.735114] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.311 [2024-06-07 14:30:15.735122] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.311 [2024-06-07 14:30:15.735129] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.311 [2024-06-07 14:30:15.735134] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:52.311 [2024-06-07 14:30:15.735236] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.311 [2024-06-07 14:30:15.735362] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.311 [2024-06-07 14:30:15.735526] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.311 [2024-06-07 14:30:15.735527] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@863 -- # return 0 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.881 Malloc0 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 
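The traced rpc_cmd calls around this point build the initiator-timeout target: a 64 MiB Malloc0 bdev is wrapped in a Delay0 delay bdev, exported over NVMe/TCP on 10.0.0.2:4420, and the delay latencies are later inflated so in-flight I/O outlives the initiator timeout before being dropped back to 30. A minimal bash sketch of that RPC sequence, reconstructed from the trace; running rpc.py directly from the SPDK checkout (instead of the rpc_cmd wrapper inside the cvl_0_0_ns_spdk namespace) is an assumption, and the latency values are copied verbatim from the trace (microseconds, if the delay bdev's usual unit applies).

rpc=./scripts/rpc.py                     # assumption: invoked from the SPDK checkout root

# Backing malloc bdev (64 MiB, 512 B blocks) wrapped by a delay bdev with the
# small initial latencies seen in the trace (-r/-t/-w/-n all 30).
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

# Export Delay0 over NVMe/TCP, matching the traced subsystem and listener;
# the initiator side then runs 'nvme connect ... -t tcp -a 10.0.0.2 -s 4420'.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# While fio runs against the connected namespace, the test inflates the
# delay-bdev latencies so outstanding I/O exceeds the initiator timeout,
# waits, then restores them, as the later bdev_delay_update_latency entries show.
for lat in avg_read avg_write p99_read; do
    $rpc bdev_delay_update_latency Delay0 "$lat" 31000000
done
$rpc bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
for lat in avg_read avg_write p99_read p99_write; do
    $rpc bdev_delay_update_latency Delay0 "$lat" 30
done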
00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.881 Delay0 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.881 [2024-06-07 14:30:16.474907] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:52.881 [2024-06-07 14:30:16.515140] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.881 14:30:16 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:54.790 14:30:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:54.790 14:30:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # local i=0 00:27:54.790 14:30:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:27:54.790 14:30:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:27:54.790 14:30:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # sleep 2 00:27:56.731 14:30:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:27:56.731 14:30:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # lsblk -l -o 
NAME,SERIAL 00:27:56.731 14:30:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:27:56.731 14:30:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:27:56.731 14:30:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:27:56.731 14:30:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # return 0 00:27:56.731 14:30:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=648290 00:27:56.731 14:30:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:56.731 14:30:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:56.731 [global] 00:27:56.731 thread=1 00:27:56.731 invalidate=1 00:27:56.731 rw=write 00:27:56.731 time_based=1 00:27:56.731 runtime=60 00:27:56.731 ioengine=libaio 00:27:56.731 direct=1 00:27:56.731 bs=4096 00:27:56.731 iodepth=1 00:27:56.731 norandommap=0 00:27:56.731 numjobs=1 00:27:56.731 00:27:56.731 verify_dump=1 00:27:56.731 verify_backlog=512 00:27:56.731 verify_state_save=0 00:27:56.731 do_verify=1 00:27:56.731 verify=crc32c-intel 00:27:56.731 [job0] 00:27:56.731 filename=/dev/nvme0n1 00:27:56.731 Could not set queue depth (nvme0n1) 00:27:56.993 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:56.993 fio-3.35 00:27:56.993 Starting 1 thread 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.590 true 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.590 true 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.590 true 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:59.590 true 00:27:59.590 14:30:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.590 
14:30:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:02.898 true 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:02.898 true 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:02.898 true 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:02.898 true 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:02.898 14:30:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 648290 00:28:59.163 00:28:59.163 job0: (groupid=0, jobs=1): err= 0: pid=648615: Fri Jun 7 14:31:20 2024 00:28:59.163 read: IOPS=73, BW=294KiB/s (301kB/s)(17.2MiB/60008msec) 00:28:59.163 slat (usec): min=3, max=14196, avg=27.64, stdev=241.14 00:28:59.163 clat (usec): min=304, max=41917k, avg=12946.04, stdev=631021.56 00:28:59.163 lat (usec): min=311, max=41917k, avg=12973.68, stdev=631021.49 00:28:59.163 clat percentiles (usec): 00:28:59.163 | 1.00th=[ 502], 5.00th=[ 594], 10.00th=[ 635], 00:28:59.163 | 20.00th=[ 685], 30.00th=[ 725], 40.00th=[ 758], 00:28:59.163 | 50.00th=[ 791], 60.00th=[ 824], 70.00th=[ 840], 00:28:59.163 | 80.00th=[ 857], 90.00th=[ 889], 95.00th=[ 41157], 00:28:59.163 | 99.00th=[ 41681], 99.50th=[ 42206], 99.90th=[ 42206], 00:28:59.163 | 99.95th=[ 42206], 99.99th=[17112761] 00:28:59.163 write: IOPS=76, BW=307KiB/s (315kB/s)(18.0MiB/60008msec); 0 zone resets 00:28:59.163 slat (nsec): min=8780, max=76692, avg=30376.51, stdev=9421.38 00:28:59.163 clat (usec): min=185, max=862, avg=552.85, stdev=104.49 00:28:59.163 lat (usec): min=194, max=913, avg=583.22, stdev=108.53 00:28:59.163 clat percentiles (usec): 00:28:59.163 | 1.00th=[ 306], 5.00th=[ 347], 10.00th=[ 420], 20.00th=[ 453], 00:28:59.163 | 30.00th=[ 515], 40.00th=[ 537], 50.00th=[ 562], 60.00th=[ 578], 00:28:59.163 | 70.00th=[ 619], 80.00th=[ 652], 90.00th=[ 685], 
95.00th=[ 709], 00:28:59.163 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[ 799], 99.95th=[ 824], 00:28:59.163 | 99.99th=[ 865] 00:28:59.163 bw ( KiB/s): min= 1104, max= 4096, per=100.00%, avg=2835.69, stdev=1136.26, samples=13 00:28:59.163 iops : min= 276, max= 1024, avg=708.92, stdev=284.06, samples=13 00:28:59.163 lat (usec) : 250=0.18%, 500=14.05%, 750=54.63%, 1000=27.81% 00:28:59.163 lat (msec) : 2=0.08%, 50=3.25%, >=2000=0.01% 00:28:59.163 cpu : usr=0.26%, sys=0.58%, ctx=9025, majf=0, minf=1 00:28:59.163 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:59.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:59.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:59.163 issued rwts: total=4413,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:59.163 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:59.163 00:28:59.163 Run status group 0 (all jobs): 00:28:59.163 READ: bw=294KiB/s (301kB/s), 294KiB/s-294KiB/s (301kB/s-301kB/s), io=17.2MiB (18.1MB), run=60008-60008msec 00:28:59.163 WRITE: bw=307KiB/s (315kB/s), 307KiB/s-307KiB/s (315kB/s-315kB/s), io=18.0MiB (18.9MB), run=60008-60008msec 00:28:59.163 00:28:59.163 Disk stats (read/write): 00:28:59.163 nvme0n1: ios=4509/4608, merge=0/0, ticks=15823/1914, in_queue=17737, util=99.79% 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:59.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1218 -- # local i=0 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1230 -- # return 0 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:59.163 nvmf hotplug test: fio successful as expected 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:59.163 14:31:20 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:59.163 rmmod nvme_tcp 00:28:59.163 rmmod nvme_fabrics 00:28:59.163 rmmod nvme_keyring 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 647474 ']' 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 647474 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@949 -- # '[' -z 647474 ']' 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # kill -0 647474 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # uname 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 647474 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # echo 'killing process with pid 647474' 00:28:59.163 killing process with pid 647474 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # kill 647474 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # wait 647474 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:59.163 14:31:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.424 14:31:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:59.424 00:28:59.424 real 1m15.961s 00:28:59.424 user 4m37.014s 00:28:59.424 sys 0m8.027s 00:28:59.424 14:31:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:59.424 14:31:23 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:28:59.424 ************************************ 00:28:59.424 END TEST nvmf_initiator_timeout 00:28:59.424 ************************************ 00:28:59.424 14:31:23 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:28:59.424 14:31:23 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:28:59.424 14:31:23 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:28:59.424 14:31:23 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:28:59.424 14:31:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:07.561 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:07.561 14:31:30 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:07.561 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:07.561 Found net devices under 0000:31:00.0: cvl_0_0 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:07.561 Found net devices under 0000:31:00.1: cvl_0_1 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:29:07.561 14:31:30 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:07.561 14:31:30 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:07.562 14:31:30 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 
00:29:07.562 14:31:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:07.562 ************************************ 00:29:07.562 START TEST nvmf_perf_adq 00:29:07.562 ************************************ 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:07.562 * Looking for test storage... 00:29:07.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.562 14:31:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:07.562 14:31:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:15.703 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:15.703 Found 0000:31:00.1 (0x8086 - 0x159b) 
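The scan above keys off PCI vendor:device IDs (0x8086:0x159b is an Intel E810 port bound to the ice driver) and then resolves each matching PCI function to its kernel net device through sysfs. A minimal stand-alone sketch of the same lookup, assuming the 0000:31:00.0 address reported in this run:
# List Intel E810 functions by vendor:device ID (0x8086:0x159b, the ID matched above)
lspci -Dnn -d 8086:159b
# Resolve a PCI function to its net device name via sysfs (address taken from this run)
ls /sys/bus/pci/devices/0000:31:00.0/net/
# Confirm the bound driver is ice
basename "$(readlink /sys/bus/pci/devices/0000:31:00.0/driver)"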
00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:15.703 Found net devices under 0000:31:00.0: cvl_0_0 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:15.703 Found net devices under 0000:31:00.1: cvl_0_1 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:29:15.703 14:31:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:29:17.086 14:31:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:29:19.041 14:31:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:24.337 14:31:47 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:29:24.337 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:24.337 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.337 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:24.337 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:24.337 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:24.337 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:24.338 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:24.338 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:24.338 Found net devices under 0000:31:00.0: cvl_0_0 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:24.338 Found net devices under 0000:31:00.1: cvl_0_1 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.338 14:31:47 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:24.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:29:24.338 00:29:24.338 --- 10.0.0.2 ping statistics --- 00:29:24.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.338 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:24.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:29:24.338 00:29:24.338 --- 10.0.0.1 ping statistics --- 00:29:24.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.338 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:24.338 14:31:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:24.339 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=670366 00:29:24.339 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 670366 00:29:24.339 14:31:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:24.339 14:31:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 670366 ']' 00:29:24.339 14:31:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.339 14:31:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:24.339 14:31:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.339 14:31:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:24.339 14:31:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:24.339 [2024-06-07 14:31:47.698609] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
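nvmftestinit above wires the two E810 ports back to back by moving one of them (cvl_0_0) into a private network namespace, addressing both sides on 10.0.0.0/24, and ping-testing the path in each direction before the target starts. A minimal sketch of the same topology, using a hypothetical veth pair (veth0/veth1 are illustrative names) in place of the physical ports:
# Create the target-side namespace and a stand-in link pair
ip netns add tgt_ns
ip link add veth0 type veth peer name veth1
ip link set veth0 netns tgt_ns
# Initiator side stays in the default namespace, like cvl_0_1 in the log
ip addr add 10.0.0.1/24 dev veth1
ip link set veth1 up
# Target side lives inside the namespace, like cvl_0_0 in the log
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth0
ip netns exec tgt_ns ip link set veth0 up
ip netns exec tgt_ns ip link set lo up
# Connectivity check in both directions, mirroring the pings above
ping -c 1 10.0.0.2
ip netns exec tgt_ns ping -c 1 10.0.0.1
The target application is then launched inside the namespace (ip netns exec) so it listens on 10.0.0.2:4420 while the initiator connects from the default namespace.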
00:29:24.339 [2024-06-07 14:31:47.698670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.339 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.339 [2024-06-07 14:31:47.776352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:24.339 [2024-06-07 14:31:47.816083] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.339 [2024-06-07 14:31:47.816122] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.339 [2024-06-07 14:31:47.816130] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.339 [2024-06-07 14:31:47.816136] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.339 [2024-06-07 14:31:47.816142] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.339 [2024-06-07 14:31:47.816232] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.339 [2024-06-07 14:31:47.816325] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.339 [2024-06-07 14:31:47.816463] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.339 [2024-06-07 14:31:47.816464] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.910 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:24.910 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:29:24.910 14:31:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:24.910 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:24.910 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:24.910 14:31:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.910 14:31:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:29:24.910 14:31:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:24.910 14:31:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:24.910 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.910 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:24.910 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.170 14:31:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:25.170 14:31:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:25.170 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.170 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.170 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.170 14:31:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.171 [2024-06-07 14:31:48.657104] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.171 Malloc1 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:25.171 [2024-06-07 14:31:48.716410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=670716 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:29:25.171 14:31:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:25.171 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.714 14:31:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:29:27.714 14:31:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:27.714 14:31:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:27.714 14:31:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:27.714 14:31:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:29:27.714 "tick_rate": 2400000000, 00:29:27.714 
"poll_groups": [ 00:29:27.714 { 00:29:27.714 "name": "nvmf_tgt_poll_group_000", 00:29:27.714 "admin_qpairs": 1, 00:29:27.715 "io_qpairs": 1, 00:29:27.715 "current_admin_qpairs": 1, 00:29:27.715 "current_io_qpairs": 1, 00:29:27.715 "pending_bdev_io": 0, 00:29:27.715 "completed_nvme_io": 20328, 00:29:27.715 "transports": [ 00:29:27.715 { 00:29:27.715 "trtype": "TCP" 00:29:27.715 } 00:29:27.715 ] 00:29:27.715 }, 00:29:27.715 { 00:29:27.715 "name": "nvmf_tgt_poll_group_001", 00:29:27.715 "admin_qpairs": 0, 00:29:27.715 "io_qpairs": 1, 00:29:27.715 "current_admin_qpairs": 0, 00:29:27.715 "current_io_qpairs": 1, 00:29:27.715 "pending_bdev_io": 0, 00:29:27.715 "completed_nvme_io": 28763, 00:29:27.715 "transports": [ 00:29:27.715 { 00:29:27.715 "trtype": "TCP" 00:29:27.715 } 00:29:27.715 ] 00:29:27.715 }, 00:29:27.715 { 00:29:27.715 "name": "nvmf_tgt_poll_group_002", 00:29:27.715 "admin_qpairs": 0, 00:29:27.715 "io_qpairs": 1, 00:29:27.715 "current_admin_qpairs": 0, 00:29:27.715 "current_io_qpairs": 1, 00:29:27.715 "pending_bdev_io": 0, 00:29:27.715 "completed_nvme_io": 22192, 00:29:27.715 "transports": [ 00:29:27.715 { 00:29:27.715 "trtype": "TCP" 00:29:27.715 } 00:29:27.715 ] 00:29:27.715 }, 00:29:27.715 { 00:29:27.715 "name": "nvmf_tgt_poll_group_003", 00:29:27.715 "admin_qpairs": 0, 00:29:27.715 "io_qpairs": 1, 00:29:27.715 "current_admin_qpairs": 0, 00:29:27.715 "current_io_qpairs": 1, 00:29:27.715 "pending_bdev_io": 0, 00:29:27.715 "completed_nvme_io": 20992, 00:29:27.715 "transports": [ 00:29:27.715 { 00:29:27.715 "trtype": "TCP" 00:29:27.715 } 00:29:27.715 ] 00:29:27.715 } 00:29:27.715 ] 00:29:27.715 }' 00:29:27.715 14:31:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:29:27.715 14:31:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:29:27.715 14:31:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:29:27.715 14:31:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:29:27.715 14:31:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 670716 00:29:35.856 Initializing NVMe Controllers 00:29:35.856 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:35.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:35.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:35.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:35.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:35.856 Initialization complete. Launching workers. 
00:29:35.856 ======================================================== 00:29:35.856 Latency(us) 00:29:35.856 Device Information : IOPS MiB/s Average min max 00:29:35.856 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11456.60 44.75 5587.87 1470.36 8843.21 00:29:35.856 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14820.90 57.89 4317.81 1097.92 10008.02 00:29:35.856 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14592.40 57.00 4385.55 1452.12 10292.96 00:29:35.856 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13586.40 53.07 4711.08 1223.36 10533.46 00:29:35.856 ======================================================== 00:29:35.856 Total : 54456.30 212.72 4701.28 1097.92 10533.46 00:29:35.856 00:29:35.856 [2024-06-07 14:31:58.833930] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2490db0 is same with the state(5) to be set 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:35.856 rmmod nvme_tcp 00:29:35.856 rmmod nvme_fabrics 00:29:35.856 rmmod nvme_keyring 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 670366 ']' 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 670366 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 670366 ']' 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 670366 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 670366 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 670366' 00:29:35.856 killing process with pid 670366 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 670366 00:29:35.856 14:31:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 670366 00:29:35.856 14:31:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:35.856 14:31:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:35.856 14:31:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:35.856 14:31:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:29:35.856 14:31:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:35.856 14:31:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.856 14:31:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:35.856 14:31:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.770 14:32:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:37.770 14:32:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:29:37.770 14:32:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:29:39.152 14:32:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:29:41.063 14:32:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:46.357 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:46.357 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:46.357 Found net devices under 0000:31:00.0: cvl_0_0 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:46.357 Found net devices under 0000:31:00.1: cvl_0_1 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:29:46.357 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:46.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:46.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:29:46.358 00:29:46.358 --- 10.0.0.2 ping statistics --- 00:29:46.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.358 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:46.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:46.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:29:46.358 00:29:46.358 --- 10.0.0.1 ping statistics --- 00:29:46.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.358 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:29:46.358 net.core.busy_poll = 1 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:29:46.358 net.core.busy_read = 1 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:46.358 14:32:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root 
mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=675170 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 675170 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 675170 ']' 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:46.620 14:32:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:46.620 [2024-06-07 14:32:10.153664] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:29:46.620 [2024-06-07 14:32:10.153754] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.620 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.620 [2024-06-07 14:32:10.237282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:46.882 [2024-06-07 14:32:10.277865] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.882 [2024-06-07 14:32:10.277910] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.882 [2024-06-07 14:32:10.277917] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.882 [2024-06-07 14:32:10.277924] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.882 [2024-06-07 14:32:10.277930] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
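adq_configure_driver above is what distinguishes this second pass: hardware traffic-class offload and busy polling are enabled, and an mqprio/flower pair steers NVMe/TCP traffic on port 4420 into its own traffic class. Consolidated from the trace (the test runs these inside the cvl_0_0_ns_spdk namespace via ip netns exec; interface name, IP, and port are the values used in this run):
# Enable hardware TC offload and disable packet-inspect optimization on the port
ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
# Turn on busy polling so application threads poll their own queues
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Split the port into two traffic classes (2 queues each) in channel mode
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP traffic destined for 10.0.0.2:4420 into traffic class 1 in hardware
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
The target side pairs this with the RPC calls that follow, sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport -t tcp --sock-priority 1, so the SPDK poll groups align with the hardware queue sets created by mqprio.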
00:29:46.882 [2024-06-07 14:32:10.278068] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.882 [2024-06-07 14:32:10.278202] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.882 [2024-06-07 14:32:10.278343] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:46.882 [2024-06-07 14:32:10.278477] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.457 14:32:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:47.457 14:32:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:29:47.457 14:32:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:47.457 14:32:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:47.457 14:32:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.457 14:32:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.457 14:32:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:29:47.457 14:32:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:47.457 14:32:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:47.457 14:32:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.457 14:32:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.457 14:32:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.457 14:32:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:47.457 14:32:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:47.457 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.457 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.457 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.457 14:32:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:47.457 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.457 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.457 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.457 14:32:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:47.457 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.457 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.718 [2024-06-07 14:32:11.109407] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.718 Malloc1 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.718 14:32:11 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.718 [2024-06-07 14:32:11.168723] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=675268 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:29:47.718 14:32:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:47.718 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.663 14:32:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:29:49.663 14:32:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:49.663 14:32:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:49.663 14:32:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:49.663 14:32:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:29:49.663 "tick_rate": 2400000000, 00:29:49.663 "poll_groups": [ 00:29:49.663 { 00:29:49.663 "name": "nvmf_tgt_poll_group_000", 00:29:49.663 "admin_qpairs": 1, 00:29:49.663 "io_qpairs": 1, 00:29:49.663 "current_admin_qpairs": 1, 00:29:49.663 "current_io_qpairs": 1, 00:29:49.663 "pending_bdev_io": 0, 00:29:49.663 "completed_nvme_io": 28980, 00:29:49.663 "transports": [ 00:29:49.663 { 00:29:49.663 "trtype": "TCP" 00:29:49.663 } 00:29:49.663 ] 00:29:49.663 }, 00:29:49.663 { 00:29:49.663 "name": "nvmf_tgt_poll_group_001", 00:29:49.663 "admin_qpairs": 0, 00:29:49.663 "io_qpairs": 3, 00:29:49.663 "current_admin_qpairs": 0, 00:29:49.663 "current_io_qpairs": 3, 00:29:49.663 "pending_bdev_io": 0, 00:29:49.663 "completed_nvme_io": 41942, 00:29:49.663 "transports": [ 00:29:49.663 { 00:29:49.663 "trtype": "TCP" 00:29:49.663 } 00:29:49.663 ] 00:29:49.663 }, 00:29:49.663 { 00:29:49.663 "name": "nvmf_tgt_poll_group_002", 00:29:49.663 "admin_qpairs": 0, 00:29:49.663 "io_qpairs": 0, 00:29:49.663 "current_admin_qpairs": 0, 00:29:49.663 "current_io_qpairs": 0, 00:29:49.663 "pending_bdev_io": 0, 00:29:49.663 "completed_nvme_io": 0, 
00:29:49.663 "transports": [ 00:29:49.663 { 00:29:49.663 "trtype": "TCP" 00:29:49.663 } 00:29:49.663 ] 00:29:49.663 }, 00:29:49.663 { 00:29:49.663 "name": "nvmf_tgt_poll_group_003", 00:29:49.663 "admin_qpairs": 0, 00:29:49.663 "io_qpairs": 0, 00:29:49.663 "current_admin_qpairs": 0, 00:29:49.663 "current_io_qpairs": 0, 00:29:49.663 "pending_bdev_io": 0, 00:29:49.663 "completed_nvme_io": 0, 00:29:49.663 "transports": [ 00:29:49.663 { 00:29:49.663 "trtype": "TCP" 00:29:49.663 } 00:29:49.663 ] 00:29:49.663 } 00:29:49.663 ] 00:29:49.663 }' 00:29:49.663 14:32:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:49.663 14:32:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:29:49.663 14:32:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:29:49.663 14:32:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:29:49.663 14:32:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 675268 00:29:57.807 Initializing NVMe Controllers 00:29:57.807 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:57.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:57.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:57.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:57.807 Initialization complete. Launching workers. 00:29:57.807 ======================================================== 00:29:57.807 Latency(us) 00:29:57.807 Device Information : IOPS MiB/s Average min max 00:29:57.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7715.60 30.14 8315.21 1114.96 54341.25 00:29:57.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7298.90 28.51 8796.26 1438.91 54975.07 00:29:57.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6661.00 26.02 9609.62 1510.70 54642.62 00:29:57.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 19163.20 74.86 3339.49 1309.27 9946.71 00:29:57.807 ======================================================== 00:29:57.807 Total : 40838.70 159.53 6277.50 1114.96 54975.07 00:29:57.807 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:57.807 rmmod nvme_tcp 00:29:57.807 rmmod nvme_fabrics 00:29:57.807 rmmod nvme_keyring 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 675170 ']' 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 675170 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 675170 ']' 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 675170 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:57.807 14:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 675170 00:29:58.068 14:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:58.068 14:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:58.068 14:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 675170' 00:29:58.068 killing process with pid 675170 00:29:58.068 14:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 675170 00:29:58.068 14:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 675170 00:29:58.068 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:58.068 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:58.068 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:58.068 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:58.068 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:58.068 14:32:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.068 14:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:58.068 14:32:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.615 14:32:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:00.615 14:32:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:00.615 00:30:00.615 real 0m52.825s 00:30:00.615 user 2m49.875s 00:30:00.615 sys 0m11.151s 00:30:00.615 14:32:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:00.615 14:32:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:30:00.615 ************************************ 00:30:00.615 END TEST nvmf_perf_adq 00:30:00.615 ************************************ 00:30:00.615 14:32:23 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:00.615 14:32:23 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:00.615 14:32:23 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:00.615 14:32:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:00.615 ************************************ 00:30:00.615 START TEST nvmf_shutdown 00:30:00.615 ************************************ 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:30:00.615 * Looking for test storage... 
00:30:00.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:00.615 ************************************ 00:30:00.615 START TEST nvmf_shutdown_tc1 00:30:00.615 ************************************ 00:30:00.615 14:32:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:30:00.615 14:32:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:30:00.616 14:32:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:00.616 14:32:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:00.616 14:32:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.616 14:32:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:00.616 14:32:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:00.616 14:32:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:00.616 14:32:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.616 14:32:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:00.616 14:32:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.616 14:32:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:00.616 14:32:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:00.616 14:32:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:00.616 14:32:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.758 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:08.759 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:08.759 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:08.759 14:32:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:08.759 Found net devices under 0000:31:00.0: cvl_0_0 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:08.759 Found net devices under 0000:31:00.1: cvl_0_1 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:08.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:30:08.759 00:30:08.759 --- 10.0.0.2 ping statistics --- 00:30:08.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.759 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:08.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:30:08.759 00:30:08.759 --- 10.0.0.1 ping statistics --- 00:30:08.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.759 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:08.759 14:32:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:08.759 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:08.759 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:08.759 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:08.759 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:08.759 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=682004 00:30:08.759 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 682004 00:30:08.759 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:08.759 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 682004 ']' 00:30:08.760 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.760 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:08.760 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.760 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:08.760 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:08.760 [2024-06-07 14:32:32.076933] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:30:08.760 [2024-06-07 14:32:32.076985] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.760 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.760 [2024-06-07 14:32:32.166971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:08.760 [2024-06-07 14:32:32.201179] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.760 [2024-06-07 14:32:32.201220] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.760 [2024-06-07 14:32:32.201228] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.760 [2024-06-07 14:32:32.201234] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.760 [2024-06-07 14:32:32.201240] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:08.760 [2024-06-07 14:32:32.204210] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:08.760 [2024-06-07 14:32:32.204347] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:08.760 [2024-06-07 14:32:32.204501] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:30:08.760 [2024-06-07 14:32:32.204501] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:09.333 [2024-06-07 14:32:32.883712] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:09.333 14:32:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:09.333 Malloc1 00:30:09.594 [2024-06-07 14:32:32.986907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.594 Malloc2 00:30:09.594 Malloc3 00:30:09.594 Malloc4 00:30:09.594 Malloc5 00:30:09.594 Malloc6 00:30:09.594 Malloc7 00:30:09.856 Malloc8 00:30:09.856 Malloc9 00:30:09.856 Malloc10 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=682383 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 682383 /var/tmp/bdevperf.sock 00:30:09.856 14:32:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 682383 ']' 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:09.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:09.856 { 00:30:09.856 "params": { 00:30:09.856 "name": "Nvme$subsystem", 00:30:09.856 "trtype": "$TEST_TRANSPORT", 00:30:09.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.856 "adrfam": "ipv4", 00:30:09.856 "trsvcid": "$NVMF_PORT", 00:30:09.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.856 "hdgst": ${hdgst:-false}, 00:30:09.856 "ddgst": ${ddgst:-false} 00:30:09.856 }, 00:30:09.856 "method": "bdev_nvme_attach_controller" 00:30:09.856 } 00:30:09.856 EOF 00:30:09.856 )") 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:09.856 { 00:30:09.856 "params": { 00:30:09.856 "name": "Nvme$subsystem", 00:30:09.856 "trtype": "$TEST_TRANSPORT", 00:30:09.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.856 "adrfam": "ipv4", 00:30:09.856 "trsvcid": "$NVMF_PORT", 00:30:09.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.856 "hdgst": ${hdgst:-false}, 00:30:09.856 "ddgst": ${ddgst:-false} 00:30:09.856 }, 00:30:09.856 "method": "bdev_nvme_attach_controller" 00:30:09.856 } 00:30:09.856 EOF 00:30:09.856 )") 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:09.856 { 00:30:09.856 "params": { 00:30:09.856 "name": "Nvme$subsystem", 00:30:09.856 "trtype": 
"$TEST_TRANSPORT", 00:30:09.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.856 "adrfam": "ipv4", 00:30:09.856 "trsvcid": "$NVMF_PORT", 00:30:09.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.856 "hdgst": ${hdgst:-false}, 00:30:09.856 "ddgst": ${ddgst:-false} 00:30:09.856 }, 00:30:09.856 "method": "bdev_nvme_attach_controller" 00:30:09.856 } 00:30:09.856 EOF 00:30:09.856 )") 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:09.856 { 00:30:09.856 "params": { 00:30:09.856 "name": "Nvme$subsystem", 00:30:09.856 "trtype": "$TEST_TRANSPORT", 00:30:09.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.856 "adrfam": "ipv4", 00:30:09.856 "trsvcid": "$NVMF_PORT", 00:30:09.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.856 "hdgst": ${hdgst:-false}, 00:30:09.856 "ddgst": ${ddgst:-false} 00:30:09.856 }, 00:30:09.856 "method": "bdev_nvme_attach_controller" 00:30:09.856 } 00:30:09.856 EOF 00:30:09.856 )") 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:09.856 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:09.856 { 00:30:09.856 "params": { 00:30:09.856 "name": "Nvme$subsystem", 00:30:09.856 "trtype": "$TEST_TRANSPORT", 00:30:09.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.856 "adrfam": "ipv4", 00:30:09.856 "trsvcid": "$NVMF_PORT", 00:30:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.857 "hdgst": ${hdgst:-false}, 00:30:09.857 "ddgst": ${ddgst:-false} 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 } 00:30:09.857 EOF 00:30:09.857 )") 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:09.857 { 00:30:09.857 "params": { 00:30:09.857 "name": "Nvme$subsystem", 00:30:09.857 "trtype": "$TEST_TRANSPORT", 00:30:09.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.857 "adrfam": "ipv4", 00:30:09.857 "trsvcid": "$NVMF_PORT", 00:30:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.857 "hdgst": ${hdgst:-false}, 00:30:09.857 "ddgst": ${ddgst:-false} 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 } 00:30:09.857 EOF 00:30:09.857 )") 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:09.857 { 00:30:09.857 "params": { 00:30:09.857 "name": "Nvme$subsystem", 00:30:09.857 "trtype": "$TEST_TRANSPORT", 
00:30:09.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.857 "adrfam": "ipv4", 00:30:09.857 "trsvcid": "$NVMF_PORT", 00:30:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.857 "hdgst": ${hdgst:-false}, 00:30:09.857 "ddgst": ${ddgst:-false} 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 } 00:30:09.857 EOF 00:30:09.857 )") 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:09.857 [2024-06-07 14:32:33.442752] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:30:09.857 [2024-06-07 14:32:33.442826] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:09.857 { 00:30:09.857 "params": { 00:30:09.857 "name": "Nvme$subsystem", 00:30:09.857 "trtype": "$TEST_TRANSPORT", 00:30:09.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.857 "adrfam": "ipv4", 00:30:09.857 "trsvcid": "$NVMF_PORT", 00:30:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.857 "hdgst": ${hdgst:-false}, 00:30:09.857 "ddgst": ${ddgst:-false} 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 } 00:30:09.857 EOF 00:30:09.857 )") 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:09.857 { 00:30:09.857 "params": { 00:30:09.857 "name": "Nvme$subsystem", 00:30:09.857 "trtype": "$TEST_TRANSPORT", 00:30:09.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.857 "adrfam": "ipv4", 00:30:09.857 "trsvcid": "$NVMF_PORT", 00:30:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.857 "hdgst": ${hdgst:-false}, 00:30:09.857 "ddgst": ${ddgst:-false} 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 } 00:30:09.857 EOF 00:30:09.857 )") 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:09.857 { 00:30:09.857 "params": { 00:30:09.857 "name": "Nvme$subsystem", 00:30:09.857 "trtype": "$TEST_TRANSPORT", 00:30:09.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.857 "adrfam": "ipv4", 00:30:09.857 "trsvcid": "$NVMF_PORT", 00:30:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.857 "hdgst": ${hdgst:-false}, 00:30:09.857 "ddgst": ${ddgst:-false} 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 } 00:30:09.857 EOF 00:30:09.857 )") 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:30:09.857 14:32:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:09.857 "params": { 00:30:09.857 "name": "Nvme1", 00:30:09.857 "trtype": "tcp", 00:30:09.857 "traddr": "10.0.0.2", 00:30:09.857 "adrfam": "ipv4", 00:30:09.857 "trsvcid": "4420", 00:30:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:09.857 "hdgst": false, 00:30:09.857 "ddgst": false 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 },{ 00:30:09.857 "params": { 00:30:09.857 "name": "Nvme2", 00:30:09.857 "trtype": "tcp", 00:30:09.857 "traddr": "10.0.0.2", 00:30:09.857 "adrfam": "ipv4", 00:30:09.857 "trsvcid": "4420", 00:30:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:09.857 "hdgst": false, 00:30:09.857 "ddgst": false 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 },{ 00:30:09.857 "params": { 00:30:09.857 "name": "Nvme3", 00:30:09.857 "trtype": "tcp", 00:30:09.857 "traddr": "10.0.0.2", 00:30:09.857 "adrfam": "ipv4", 00:30:09.857 "trsvcid": "4420", 00:30:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:09.857 "hdgst": false, 00:30:09.857 "ddgst": false 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 },{ 00:30:09.857 "params": { 00:30:09.857 "name": "Nvme4", 00:30:09.857 "trtype": "tcp", 00:30:09.857 "traddr": "10.0.0.2", 00:30:09.857 "adrfam": "ipv4", 00:30:09.857 "trsvcid": "4420", 00:30:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:09.857 "hdgst": false, 00:30:09.857 "ddgst": false 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 },{ 00:30:09.857 "params": { 00:30:09.857 "name": "Nvme5", 00:30:09.857 "trtype": "tcp", 00:30:09.857 "traddr": "10.0.0.2", 00:30:09.857 "adrfam": "ipv4", 00:30:09.857 "trsvcid": "4420", 00:30:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:09.857 "hdgst": false, 00:30:09.857 "ddgst": false 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 },{ 00:30:09.857 "params": { 00:30:09.857 "name": "Nvme6", 00:30:09.857 "trtype": "tcp", 00:30:09.857 "traddr": "10.0.0.2", 00:30:09.857 "adrfam": "ipv4", 00:30:09.857 "trsvcid": "4420", 00:30:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:09.857 "hdgst": false, 00:30:09.857 "ddgst": false 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 },{ 00:30:09.857 "params": { 00:30:09.857 "name": "Nvme7", 00:30:09.857 "trtype": "tcp", 00:30:09.857 "traddr": "10.0.0.2", 00:30:09.857 "adrfam": "ipv4", 00:30:09.857 "trsvcid": "4420", 00:30:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:09.857 "hdgst": false, 00:30:09.857 "ddgst": false 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 },{ 00:30:09.857 "params": { 00:30:09.857 "name": "Nvme8", 00:30:09.857 "trtype": "tcp", 00:30:09.857 "traddr": "10.0.0.2", 00:30:09.857 "adrfam": "ipv4", 00:30:09.857 "trsvcid": "4420", 00:30:09.857 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:09.857 "hdgst": false, 00:30:09.857 "ddgst": false 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 },{ 00:30:09.857 "params": { 00:30:09.857 "name": "Nvme9", 00:30:09.857 "trtype": "tcp", 00:30:09.857 "traddr": "10.0.0.2", 00:30:09.857 "adrfam": "ipv4", 00:30:09.857 "trsvcid": "4420", 00:30:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:09.857 "hdgst": false, 00:30:09.857 "ddgst": false 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 },{ 00:30:09.857 "params": { 00:30:09.857 "name": "Nvme10", 00:30:09.857 "trtype": "tcp", 00:30:09.857 "traddr": "10.0.0.2", 00:30:09.857 "adrfam": "ipv4", 00:30:09.857 "trsvcid": "4420", 00:30:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:09.857 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:09.857 "hdgst": false, 00:30:09.857 "ddgst": false 00:30:09.857 }, 00:30:09.857 "method": "bdev_nvme_attach_controller" 00:30:09.857 }' 00:30:09.857 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.118 [2024-06-07 14:32:33.511710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.118 [2024-06-07 14:32:33.543336] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.505 14:32:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:11.505 14:32:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:30:11.505 14:32:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:11.505 14:32:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.505 14:32:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:11.505 14:32:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.505 14:32:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 682383 00:30:11.505 14:32:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:30:11.505 14:32:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:30:12.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 682383 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:12.448 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 682004 00:30:12.448 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:12.448 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:12.448 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:30:12.448 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:30:12.448 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.448 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:30:12.448 { 00:30:12.448 "params": { 00:30:12.448 "name": "Nvme$subsystem", 00:30:12.448 "trtype": "$TEST_TRANSPORT", 00:30:12.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.448 "adrfam": "ipv4", 00:30:12.448 "trsvcid": "$NVMF_PORT", 00:30:12.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.448 "hdgst": ${hdgst:-false}, 00:30:12.449 "ddgst": ${ddgst:-false} 00:30:12.449 }, 00:30:12.449 "method": "bdev_nvme_attach_controller" 00:30:12.449 } 00:30:12.449 EOF 00:30:12.449 )") 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:12.449 { 00:30:12.449 "params": { 00:30:12.449 "name": "Nvme$subsystem", 00:30:12.449 "trtype": "$TEST_TRANSPORT", 00:30:12.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.449 "adrfam": "ipv4", 00:30:12.449 "trsvcid": "$NVMF_PORT", 00:30:12.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.449 "hdgst": ${hdgst:-false}, 00:30:12.449 "ddgst": ${ddgst:-false} 00:30:12.449 }, 00:30:12.449 "method": "bdev_nvme_attach_controller" 00:30:12.449 } 00:30:12.449 EOF 00:30:12.449 )") 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:12.449 { 00:30:12.449 "params": { 00:30:12.449 "name": "Nvme$subsystem", 00:30:12.449 "trtype": "$TEST_TRANSPORT", 00:30:12.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.449 "adrfam": "ipv4", 00:30:12.449 "trsvcid": "$NVMF_PORT", 00:30:12.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.449 "hdgst": ${hdgst:-false}, 00:30:12.449 "ddgst": ${ddgst:-false} 00:30:12.449 }, 00:30:12.449 "method": "bdev_nvme_attach_controller" 00:30:12.449 } 00:30:12.449 EOF 00:30:12.449 )") 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:12.449 { 00:30:12.449 "params": { 00:30:12.449 "name": "Nvme$subsystem", 00:30:12.449 "trtype": "$TEST_TRANSPORT", 00:30:12.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.449 "adrfam": "ipv4", 00:30:12.449 "trsvcid": "$NVMF_PORT", 00:30:12.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.449 "hdgst": ${hdgst:-false}, 00:30:12.449 "ddgst": ${ddgst:-false} 00:30:12.449 }, 00:30:12.449 "method": "bdev_nvme_attach_controller" 00:30:12.449 } 00:30:12.449 EOF 00:30:12.449 )") 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:12.449 { 00:30:12.449 
"params": { 00:30:12.449 "name": "Nvme$subsystem", 00:30:12.449 "trtype": "$TEST_TRANSPORT", 00:30:12.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.449 "adrfam": "ipv4", 00:30:12.449 "trsvcid": "$NVMF_PORT", 00:30:12.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.449 "hdgst": ${hdgst:-false}, 00:30:12.449 "ddgst": ${ddgst:-false} 00:30:12.449 }, 00:30:12.449 "method": "bdev_nvme_attach_controller" 00:30:12.449 } 00:30:12.449 EOF 00:30:12.449 )") 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:12.449 { 00:30:12.449 "params": { 00:30:12.449 "name": "Nvme$subsystem", 00:30:12.449 "trtype": "$TEST_TRANSPORT", 00:30:12.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.449 "adrfam": "ipv4", 00:30:12.449 "trsvcid": "$NVMF_PORT", 00:30:12.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.449 "hdgst": ${hdgst:-false}, 00:30:12.449 "ddgst": ${ddgst:-false} 00:30:12.449 }, 00:30:12.449 "method": "bdev_nvme_attach_controller" 00:30:12.449 } 00:30:12.449 EOF 00:30:12.449 )") 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:12.449 { 00:30:12.449 "params": { 00:30:12.449 "name": "Nvme$subsystem", 00:30:12.449 "trtype": "$TEST_TRANSPORT", 00:30:12.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.449 "adrfam": "ipv4", 00:30:12.449 "trsvcid": "$NVMF_PORT", 00:30:12.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.449 "hdgst": ${hdgst:-false}, 00:30:12.449 "ddgst": ${ddgst:-false} 00:30:12.449 }, 00:30:12.449 "method": "bdev_nvme_attach_controller" 00:30:12.449 } 00:30:12.449 EOF 00:30:12.449 )") 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:12.449 [2024-06-07 14:32:35.989155] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:30:12.449 [2024-06-07 14:32:35.989239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid682763 ] 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:12.449 { 00:30:12.449 "params": { 00:30:12.449 "name": "Nvme$subsystem", 00:30:12.449 "trtype": "$TEST_TRANSPORT", 00:30:12.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.449 "adrfam": "ipv4", 00:30:12.449 "trsvcid": "$NVMF_PORT", 00:30:12.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.449 "hdgst": ${hdgst:-false}, 00:30:12.449 "ddgst": ${ddgst:-false} 00:30:12.449 }, 00:30:12.449 "method": "bdev_nvme_attach_controller" 00:30:12.449 } 00:30:12.449 EOF 00:30:12.449 )") 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:12.449 { 00:30:12.449 "params": { 00:30:12.449 "name": "Nvme$subsystem", 00:30:12.449 "trtype": "$TEST_TRANSPORT", 00:30:12.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.449 "adrfam": "ipv4", 00:30:12.449 "trsvcid": "$NVMF_PORT", 00:30:12.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.449 "hdgst": ${hdgst:-false}, 00:30:12.449 "ddgst": ${ddgst:-false} 00:30:12.449 }, 00:30:12.449 "method": "bdev_nvme_attach_controller" 00:30:12.449 } 00:30:12.449 EOF 00:30:12.449 )") 00:30:12.449 14:32:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:12.449 14:32:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:12.449 14:32:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:12.449 { 00:30:12.449 "params": { 00:30:12.449 "name": "Nvme$subsystem", 00:30:12.449 "trtype": "$TEST_TRANSPORT", 00:30:12.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.449 "adrfam": "ipv4", 00:30:12.449 "trsvcid": "$NVMF_PORT", 00:30:12.449 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.449 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.449 "hdgst": ${hdgst:-false}, 00:30:12.449 "ddgst": ${ddgst:-false} 00:30:12.449 }, 00:30:12.449 "method": "bdev_nvme_attach_controller" 00:30:12.449 } 00:30:12.449 EOF 00:30:12.449 )") 00:30:12.449 14:32:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:30:12.449 14:32:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
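(Editor's sketch.) The heredoc loop traced above (nvmf/common.sh@534-554) builds one bdev_nvme_attach_controller entry per subsystem id and, in the lines that follow, joins them with IFS=, before handing the result to bdevperf via --json. A minimal standalone sketch of that pattern; the transport/address defaults are filled in only for illustration and are not taken from shutdown.sh itself:

TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

gen_target_json_sketch() {
  # One attach_controller config fragment per requested subsystem id (default: 1).
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  # Join the fragments with commas, as the IFS=, / printf '%s\n' pair in the trace does.
  local IFS=,
  printf '%s\n' "${config[*]}"
}

gen_target_json_sketch 1 2 3   # emits three comma-separated attach_controller entries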
00:30:12.449 14:32:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:30:12.449 14:32:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:12.449 "params": { 00:30:12.449 "name": "Nvme1", 00:30:12.449 "trtype": "tcp", 00:30:12.449 "traddr": "10.0.0.2", 00:30:12.449 "adrfam": "ipv4", 00:30:12.449 "trsvcid": "4420", 00:30:12.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:12.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:12.449 "hdgst": false, 00:30:12.449 "ddgst": false 00:30:12.449 }, 00:30:12.449 "method": "bdev_nvme_attach_controller" 00:30:12.449 },{ 00:30:12.449 "params": { 00:30:12.449 "name": "Nvme2", 00:30:12.449 "trtype": "tcp", 00:30:12.449 "traddr": "10.0.0.2", 00:30:12.449 "adrfam": "ipv4", 00:30:12.449 "trsvcid": "4420", 00:30:12.450 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:12.450 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:12.450 "hdgst": false, 00:30:12.450 "ddgst": false 00:30:12.450 }, 00:30:12.450 "method": "bdev_nvme_attach_controller" 00:30:12.450 },{ 00:30:12.450 "params": { 00:30:12.450 "name": "Nvme3", 00:30:12.450 "trtype": "tcp", 00:30:12.450 "traddr": "10.0.0.2", 00:30:12.450 "adrfam": "ipv4", 00:30:12.450 "trsvcid": "4420", 00:30:12.450 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:12.450 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:12.450 "hdgst": false, 00:30:12.450 "ddgst": false 00:30:12.450 }, 00:30:12.450 "method": "bdev_nvme_attach_controller" 00:30:12.450 },{ 00:30:12.450 "params": { 00:30:12.450 "name": "Nvme4", 00:30:12.450 "trtype": "tcp", 00:30:12.450 "traddr": "10.0.0.2", 00:30:12.450 "adrfam": "ipv4", 00:30:12.450 "trsvcid": "4420", 00:30:12.450 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:12.450 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:12.450 "hdgst": false, 00:30:12.450 "ddgst": false 00:30:12.450 }, 00:30:12.450 "method": "bdev_nvme_attach_controller" 00:30:12.450 },{ 00:30:12.450 "params": { 00:30:12.450 "name": "Nvme5", 00:30:12.450 "trtype": "tcp", 00:30:12.450 "traddr": "10.0.0.2", 00:30:12.450 "adrfam": "ipv4", 00:30:12.450 "trsvcid": "4420", 00:30:12.450 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:12.450 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:12.450 "hdgst": false, 00:30:12.450 "ddgst": false 00:30:12.450 }, 00:30:12.450 "method": "bdev_nvme_attach_controller" 00:30:12.450 },{ 00:30:12.450 "params": { 00:30:12.450 "name": "Nvme6", 00:30:12.450 "trtype": "tcp", 00:30:12.450 "traddr": "10.0.0.2", 00:30:12.450 "adrfam": "ipv4", 00:30:12.450 "trsvcid": "4420", 00:30:12.450 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:12.450 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:12.450 "hdgst": false, 00:30:12.450 "ddgst": false 00:30:12.450 }, 00:30:12.450 "method": "bdev_nvme_attach_controller" 00:30:12.450 },{ 00:30:12.450 "params": { 00:30:12.450 "name": "Nvme7", 00:30:12.450 "trtype": "tcp", 00:30:12.450 "traddr": "10.0.0.2", 00:30:12.450 "adrfam": "ipv4", 00:30:12.450 "trsvcid": "4420", 00:30:12.450 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:12.450 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:12.450 "hdgst": false, 00:30:12.450 "ddgst": false 00:30:12.450 }, 00:30:12.450 "method": "bdev_nvme_attach_controller" 00:30:12.450 },{ 00:30:12.450 "params": { 00:30:12.450 "name": "Nvme8", 00:30:12.450 "trtype": "tcp", 00:30:12.450 "traddr": "10.0.0.2", 00:30:12.450 "adrfam": "ipv4", 00:30:12.450 "trsvcid": "4420", 00:30:12.450 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:12.450 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:12.450 "hdgst": false, 
00:30:12.450 "ddgst": false 00:30:12.450 }, 00:30:12.450 "method": "bdev_nvme_attach_controller" 00:30:12.450 },{ 00:30:12.450 "params": { 00:30:12.450 "name": "Nvme9", 00:30:12.450 "trtype": "tcp", 00:30:12.450 "traddr": "10.0.0.2", 00:30:12.450 "adrfam": "ipv4", 00:30:12.450 "trsvcid": "4420", 00:30:12.450 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:12.450 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:12.450 "hdgst": false, 00:30:12.450 "ddgst": false 00:30:12.450 }, 00:30:12.450 "method": "bdev_nvme_attach_controller" 00:30:12.450 },{ 00:30:12.450 "params": { 00:30:12.450 "name": "Nvme10", 00:30:12.450 "trtype": "tcp", 00:30:12.450 "traddr": "10.0.0.2", 00:30:12.450 "adrfam": "ipv4", 00:30:12.450 "trsvcid": "4420", 00:30:12.450 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:12.450 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:12.450 "hdgst": false, 00:30:12.450 "ddgst": false 00:30:12.450 }, 00:30:12.450 "method": "bdev_nvme_attach_controller" 00:30:12.450 }' 00:30:12.450 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.450 [2024-06-07 14:32:36.054575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.450 [2024-06-07 14:32:36.085989] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.910 Running I/O for 1 seconds... 00:30:15.295 00:30:15.295 Latency(us) 00:30:15.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.295 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.295 Verification LBA range: start 0x0 length 0x400 00:30:15.295 Nvme1n1 : 1.13 226.44 14.15 0.00 0.00 279786.45 16602.45 246415.36 00:30:15.295 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.295 Verification LBA range: start 0x0 length 0x400 00:30:15.295 Nvme2n1 : 1.14 224.22 14.01 0.00 0.00 277903.36 17803.95 255153.49 00:30:15.295 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.295 Verification LBA range: start 0x0 length 0x400 00:30:15.295 Nvme3n1 : 1.12 229.08 14.32 0.00 0.00 267142.61 15291.73 241172.48 00:30:15.295 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.295 Verification LBA range: start 0x0 length 0x400 00:30:15.295 Nvme4n1 : 1.11 229.82 14.36 0.00 0.00 261410.77 20753.07 244667.73 00:30:15.295 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.295 Verification LBA range: start 0x0 length 0x400 00:30:15.295 Nvme5n1 : 1.12 228.59 14.29 0.00 0.00 258148.69 19660.80 260396.37 00:30:15.295 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.295 Verification LBA range: start 0x0 length 0x400 00:30:15.295 Nvme6n1 : 1.13 227.42 14.21 0.00 0.00 254906.24 17913.17 246415.36 00:30:15.295 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.295 Verification LBA range: start 0x0 length 0x400 00:30:15.295 Nvme7n1 : 1.16 275.09 17.19 0.00 0.00 206853.97 14745.60 239424.85 00:30:15.295 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.295 Verification LBA range: start 0x0 length 0x400 00:30:15.295 Nvme8n1 : 1.17 273.96 17.12 0.00 0.00 203546.28 23156.05 241172.48 00:30:15.295 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:15.295 Verification LBA range: start 0x0 length 0x400 00:30:15.295 Nvme9n1 : 1.17 282.50 17.66 0.00 0.00 193532.39 2594.13 246415.36 00:30:15.295 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:30:15.295 Verification LBA range: start 0x0 length 0x400 00:30:15.295 Nvme10n1 : 1.18 274.68 17.17 0.00 0.00 196810.58 1727.15 267386.88 00:30:15.295 =================================================================================================================== 00:30:15.295 Total : 2471.79 154.49 0.00 0.00 236147.97 1727.15 267386.88 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:15.295 rmmod nvme_tcp 00:30:15.295 rmmod nvme_fabrics 00:30:15.295 rmmod nvme_keyring 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 682004 ']' 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 682004 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 682004 ']' 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 682004 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 682004 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 682004' 00:30:15.295 killing process with pid 682004 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 682004 00:30:15.295 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 682004 
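(Editor's sketch.) The stoptarget/nvmftestfini sequence traced after the latency table reduces to a short teardown pattern: remove the bdevperf state and RPC files, unload the kernel NVMe/TCP initiator modules, then kill the target by pid. A standalone sketch assuming the same paths and pid variables as the trace; helper names only mirror autotest_common.sh:

killprocess_sketch() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 0                        # process already gone
  [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # never kill sudo itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null                                       # reap it when it is our child
}

stoptarget_sketch() {
  rm -f ./local-job0-0-verify.state                             # bdevperf state file
  rm -rf "$rootdir/test/nvmf/target/bdevperf.conf" "$rootdir/test/nvmf/target/rpcs.txt"
  modprobe -v -r nvme-tcp nvme-fabrics                          # produces the rmmod lines seen above
  killprocess_sketch "$nvmfpid"                                 # $rootdir/$nvmfpid come from the test env
}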
00:30:15.557 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:15.557 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:15.557 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:15.557 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:15.557 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:15.557 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.557 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:15.557 14:32:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.469 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:17.469 00:30:17.469 real 0m17.096s 00:30:17.469 user 0m33.252s 00:30:17.469 sys 0m7.068s 00:30:17.469 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:17.469 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:17.469 ************************************ 00:30:17.469 END TEST nvmf_shutdown_tc1 00:30:17.469 ************************************ 00:30:17.469 14:32:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:17.470 14:32:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:17.470 14:32:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:17.470 14:32:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:17.731 ************************************ 00:30:17.731 START TEST nvmf_shutdown_tc2 00:30:17.731 ************************************ 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.731 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:17.732 14:32:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:17.732 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:17.732 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:17.732 Found net devices under 0000:31:00.0: cvl_0_0 00:30:17.732 14:32:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:17.732 Found net devices under 0000:31:00.1: cvl_0_1 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:17.732 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:17.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.724 ms 00:30:17.994 00:30:17.994 --- 10.0.0.2 ping statistics --- 00:30:17.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.994 rtt min/avg/max/mdev = 0.724/0.724/0.724/0.000 ms 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:17.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:30:17.994 00:30:17.994 --- 10.0.0.1 ping statistics --- 00:30:17.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.994 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=683941 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 683941 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@830 -- # '[' -z 683941 ']' 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:17.994 14:32:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.994 [2024-06-07 14:32:41.591823] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:30:17.994 [2024-06-07 14:32:41.591889] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.994 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.256 [2024-06-07 14:32:41.685676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:18.256 [2024-06-07 14:32:41.720259] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.256 [2024-06-07 14:32:41.720296] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.256 [2024-06-07 14:32:41.720302] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.256 [2024-06-07 14:32:41.720306] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.256 [2024-06-07 14:32:41.720310] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
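(Editor's sketch.) The nvmftestinit steps traced a little earlier wire the two e810 ports into a back-to-back NVMe/TCP topology: cvl_0_0 is moved into a network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). Condensed from the traced commands, with the interface names exactly as discovered on this node:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                                # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0        # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
ping -c 1 10.0.0.2                                             # initiator -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1                         # target -> initiator sanity check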
00:30:18.256 [2024-06-07 14:32:41.720446] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:18.256 [2024-06-07 14:32:41.720700] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:18.256 [2024-06-07 14:32:41.720824] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.256 [2024-06-07 14:32:41.720824] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.829 [2024-06-07 14:32:42.416542] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:18.829 14:32:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:18.829 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:30:19.089 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:19.089 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:19.089 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.089 Malloc1 00:30:19.089 [2024-06-07 14:32:42.515071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.089 Malloc2 00:30:19.089 Malloc3 00:30:19.089 Malloc4 00:30:19.089 Malloc5 00:30:19.089 Malloc6 00:30:19.089 Malloc7 00:30:19.369 Malloc8 00:30:19.369 Malloc9 00:30:19.369 Malloc10 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=684256 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 684256 /var/tmp/bdevperf.sock 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 684256 ']' 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:19.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
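(Editor's sketch.) Each "cat" in the create_subsystems loop above appends one subsystem's worth of RPCs to rpcs.txt, which is then replayed against the running nvmf_tgt; the trace only shows the resulting Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener, so the bdev sizes and flags below are illustrative assumptions, not the literal shutdown.sh contents:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in $(seq 1 10); do
  cat <<EOF >> rpcs.txt
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
$rpc < rpcs.txt   # rpc.py replays the piped commands, one per line, against the running nvmf_tgt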
00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.369 { 00:30:19.369 "params": { 00:30:19.369 "name": "Nvme$subsystem", 00:30:19.369 "trtype": "$TEST_TRANSPORT", 00:30:19.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.369 "adrfam": "ipv4", 00:30:19.369 "trsvcid": "$NVMF_PORT", 00:30:19.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.369 "hdgst": ${hdgst:-false}, 00:30:19.369 "ddgst": ${ddgst:-false} 00:30:19.369 }, 00:30:19.369 "method": "bdev_nvme_attach_controller" 00:30:19.369 } 00:30:19.369 EOF 00:30:19.369 )") 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.369 { 00:30:19.369 "params": { 00:30:19.369 "name": "Nvme$subsystem", 00:30:19.369 "trtype": "$TEST_TRANSPORT", 00:30:19.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.369 "adrfam": "ipv4", 00:30:19.369 "trsvcid": "$NVMF_PORT", 00:30:19.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.369 "hdgst": ${hdgst:-false}, 00:30:19.369 "ddgst": ${ddgst:-false} 00:30:19.369 }, 00:30:19.369 "method": "bdev_nvme_attach_controller" 00:30:19.369 } 00:30:19.369 EOF 00:30:19.369 )") 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.369 { 00:30:19.369 "params": { 00:30:19.369 "name": "Nvme$subsystem", 00:30:19.369 "trtype": "$TEST_TRANSPORT", 00:30:19.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.369 "adrfam": "ipv4", 00:30:19.369 "trsvcid": "$NVMF_PORT", 00:30:19.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.369 "hdgst": ${hdgst:-false}, 00:30:19.369 "ddgst": ${ddgst:-false} 00:30:19.369 }, 00:30:19.369 "method": "bdev_nvme_attach_controller" 00:30:19.369 } 00:30:19.369 EOF 00:30:19.369 )") 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.369 { 00:30:19.369 "params": { 00:30:19.369 "name": "Nvme$subsystem", 00:30:19.369 "trtype": "$TEST_TRANSPORT", 00:30:19.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.369 "adrfam": "ipv4", 00:30:19.369 "trsvcid": "$NVMF_PORT", 00:30:19.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.369 "hdgst": ${hdgst:-false}, 00:30:19.369 "ddgst": ${ddgst:-false} 00:30:19.369 }, 00:30:19.369 "method": "bdev_nvme_attach_controller" 00:30:19.369 } 00:30:19.369 EOF 00:30:19.369 )") 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.369 { 00:30:19.369 "params": { 00:30:19.369 "name": "Nvme$subsystem", 00:30:19.369 "trtype": "$TEST_TRANSPORT", 00:30:19.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.369 "adrfam": "ipv4", 00:30:19.369 "trsvcid": "$NVMF_PORT", 00:30:19.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.369 "hdgst": ${hdgst:-false}, 00:30:19.369 "ddgst": ${ddgst:-false} 00:30:19.369 }, 00:30:19.369 "method": "bdev_nvme_attach_controller" 00:30:19.369 } 00:30:19.369 EOF 00:30:19.369 )") 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.369 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.369 { 00:30:19.370 "params": { 00:30:19.370 "name": "Nvme$subsystem", 00:30:19.370 "trtype": "$TEST_TRANSPORT", 00:30:19.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.370 "adrfam": "ipv4", 00:30:19.370 "trsvcid": "$NVMF_PORT", 00:30:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.370 "hdgst": ${hdgst:-false}, 00:30:19.370 "ddgst": ${ddgst:-false} 00:30:19.370 }, 00:30:19.370 "method": "bdev_nvme_attach_controller" 00:30:19.370 } 00:30:19.370 EOF 00:30:19.370 )") 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:19.370 [2024-06-07 14:32:42.958098] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:30:19.370 [2024-06-07 14:32:42.958151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid684256 ] 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.370 { 00:30:19.370 "params": { 00:30:19.370 "name": "Nvme$subsystem", 00:30:19.370 "trtype": "$TEST_TRANSPORT", 00:30:19.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.370 "adrfam": "ipv4", 00:30:19.370 "trsvcid": "$NVMF_PORT", 00:30:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.370 "hdgst": ${hdgst:-false}, 00:30:19.370 "ddgst": ${ddgst:-false} 00:30:19.370 }, 00:30:19.370 "method": "bdev_nvme_attach_controller" 00:30:19.370 } 00:30:19.370 EOF 00:30:19.370 )") 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.370 { 00:30:19.370 "params": { 00:30:19.370 "name": "Nvme$subsystem", 00:30:19.370 "trtype": "$TEST_TRANSPORT", 00:30:19.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.370 "adrfam": "ipv4", 00:30:19.370 "trsvcid": "$NVMF_PORT", 00:30:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.370 "hdgst": ${hdgst:-false}, 00:30:19.370 "ddgst": ${ddgst:-false} 00:30:19.370 }, 00:30:19.370 "method": "bdev_nvme_attach_controller" 00:30:19.370 } 00:30:19.370 EOF 00:30:19.370 )") 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.370 { 00:30:19.370 "params": { 00:30:19.370 "name": "Nvme$subsystem", 00:30:19.370 "trtype": "$TEST_TRANSPORT", 00:30:19.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.370 "adrfam": "ipv4", 00:30:19.370 "trsvcid": "$NVMF_PORT", 00:30:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.370 "hdgst": ${hdgst:-false}, 00:30:19.370 "ddgst": ${ddgst:-false} 00:30:19.370 }, 00:30:19.370 "method": "bdev_nvme_attach_controller" 00:30:19.370 } 00:30:19.370 EOF 00:30:19.370 )") 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.370 { 00:30:19.370 "params": { 00:30:19.370 "name": "Nvme$subsystem", 00:30:19.370 "trtype": "$TEST_TRANSPORT", 00:30:19.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.370 "adrfam": "ipv4", 00:30:19.370 "trsvcid": "$NVMF_PORT", 00:30:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.370 "hdgst": ${hdgst:-false}, 
00:30:19.370 "ddgst": ${ddgst:-false} 00:30:19.370 }, 00:30:19.370 "method": "bdev_nvme_attach_controller" 00:30:19.370 } 00:30:19.370 EOF 00:30:19.370 )") 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:30:19.370 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:30:19.370 14:32:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:19.370 "params": { 00:30:19.370 "name": "Nvme1", 00:30:19.370 "trtype": "tcp", 00:30:19.370 "traddr": "10.0.0.2", 00:30:19.370 "adrfam": "ipv4", 00:30:19.370 "trsvcid": "4420", 00:30:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:19.370 "hdgst": false, 00:30:19.370 "ddgst": false 00:30:19.370 }, 00:30:19.370 "method": "bdev_nvme_attach_controller" 00:30:19.370 },{ 00:30:19.370 "params": { 00:30:19.370 "name": "Nvme2", 00:30:19.370 "trtype": "tcp", 00:30:19.370 "traddr": "10.0.0.2", 00:30:19.370 "adrfam": "ipv4", 00:30:19.370 "trsvcid": "4420", 00:30:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:19.370 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:19.370 "hdgst": false, 00:30:19.370 "ddgst": false 00:30:19.370 }, 00:30:19.370 "method": "bdev_nvme_attach_controller" 00:30:19.370 },{ 00:30:19.370 "params": { 00:30:19.370 "name": "Nvme3", 00:30:19.370 "trtype": "tcp", 00:30:19.370 "traddr": "10.0.0.2", 00:30:19.370 "adrfam": "ipv4", 00:30:19.370 "trsvcid": "4420", 00:30:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:19.370 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:19.370 "hdgst": false, 00:30:19.370 "ddgst": false 00:30:19.370 }, 00:30:19.370 "method": "bdev_nvme_attach_controller" 00:30:19.370 },{ 00:30:19.370 "params": { 00:30:19.370 "name": "Nvme4", 00:30:19.370 "trtype": "tcp", 00:30:19.370 "traddr": "10.0.0.2", 00:30:19.370 "adrfam": "ipv4", 00:30:19.370 "trsvcid": "4420", 00:30:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:19.370 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:19.370 "hdgst": false, 00:30:19.370 "ddgst": false 00:30:19.370 }, 00:30:19.370 "method": "bdev_nvme_attach_controller" 00:30:19.370 },{ 00:30:19.370 "params": { 00:30:19.370 "name": "Nvme5", 00:30:19.370 "trtype": "tcp", 00:30:19.370 "traddr": "10.0.0.2", 00:30:19.370 "adrfam": "ipv4", 00:30:19.370 "trsvcid": "4420", 00:30:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:19.370 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:19.370 "hdgst": false, 00:30:19.370 "ddgst": false 00:30:19.370 }, 00:30:19.370 "method": "bdev_nvme_attach_controller" 00:30:19.370 },{ 00:30:19.370 "params": { 00:30:19.370 "name": "Nvme6", 00:30:19.370 "trtype": "tcp", 00:30:19.370 "traddr": "10.0.0.2", 00:30:19.370 "adrfam": "ipv4", 00:30:19.370 "trsvcid": "4420", 00:30:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:19.370 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:19.370 "hdgst": false, 00:30:19.370 "ddgst": false 00:30:19.370 }, 00:30:19.370 "method": "bdev_nvme_attach_controller" 00:30:19.370 },{ 00:30:19.370 "params": { 00:30:19.370 "name": "Nvme7", 00:30:19.370 "trtype": "tcp", 00:30:19.370 "traddr": "10.0.0.2", 00:30:19.370 "adrfam": "ipv4", 00:30:19.370 "trsvcid": "4420", 00:30:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:19.370 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:19.370 "hdgst": false, 00:30:19.370 "ddgst": false 
00:30:19.370 }, 00:30:19.370 "method": "bdev_nvme_attach_controller" 00:30:19.370 },{ 00:30:19.370 "params": { 00:30:19.370 "name": "Nvme8", 00:30:19.370 "trtype": "tcp", 00:30:19.370 "traddr": "10.0.0.2", 00:30:19.370 "adrfam": "ipv4", 00:30:19.370 "trsvcid": "4420", 00:30:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:19.370 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:19.370 "hdgst": false, 00:30:19.370 "ddgst": false 00:30:19.370 }, 00:30:19.370 "method": "bdev_nvme_attach_controller" 00:30:19.370 },{ 00:30:19.370 "params": { 00:30:19.370 "name": "Nvme9", 00:30:19.370 "trtype": "tcp", 00:30:19.370 "traddr": "10.0.0.2", 00:30:19.370 "adrfam": "ipv4", 00:30:19.370 "trsvcid": "4420", 00:30:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:19.370 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:19.370 "hdgst": false, 00:30:19.370 "ddgst": false 00:30:19.370 }, 00:30:19.370 "method": "bdev_nvme_attach_controller" 00:30:19.370 },{ 00:30:19.370 "params": { 00:30:19.370 "name": "Nvme10", 00:30:19.370 "trtype": "tcp", 00:30:19.370 "traddr": "10.0.0.2", 00:30:19.370 "adrfam": "ipv4", 00:30:19.370 "trsvcid": "4420", 00:30:19.370 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:19.370 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:19.370 "hdgst": false, 00:30:19.370 "ddgst": false 00:30:19.370 }, 00:30:19.370 "method": "bdev_nvme_attach_controller" 00:30:19.370 }' 00:30:19.645 [2024-06-07 14:32:43.023575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.645 [2024-06-07 14:32:43.055376] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.028 Running I/O for 10 seconds... 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.028 14:32:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:30:21.028 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:21.289 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:21.289 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:21.289 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:21.289 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:21.289 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.289 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:21.289 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.550 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:30:21.550 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:30:21.550 14:32:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 684256 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 684256 ']' 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 684256 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 684256 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 684256' 00:30:21.811 killing process with pid 684256 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 684256 00:30:21.811 14:32:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 684256 00:30:21.811 Received shutdown signal, test time was about 0.978669 seconds 00:30:21.811 00:30:21.811 Latency(us) 00:30:21.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.811 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:21.811 Verification LBA range: start 0x0 length 0x400 00:30:21.811 Nvme1n1 : 0.97 264.92 16.56 0.00 0.00 238700.59 14090.24 263891.63 00:30:21.811 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:21.811 Verification LBA range: start 0x0 length 0x400 00:30:21.811 Nvme2n1 : 0.98 262.06 16.38 0.00 0.00 236232.32 23483.73 274377.39 00:30:21.811 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:21.811 Verification LBA range: start 0x0 length 0x400 00:30:21.811 Nvme3n1 : 0.98 261.82 16.36 0.00 0.00 231191.57 12288.00 253405.87 00:30:21.811 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:21.811 Verification LBA range: start 0x0 length 0x400 00:30:21.811 Nvme4n1 : 0.97 263.93 16.50 0.00 0.00 225127.25 18677.76 246415.36 00:30:21.811 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:21.811 Verification LBA range: start 0x0 length 0x400 00:30:21.811 Nvme5n1 : 0.97 263.17 16.45 0.00 0.00 220965.55 18786.99 222822.40 00:30:21.811 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:21.811 Verification LBA range: start 0x0 length 0x400 00:30:21.811 Nvme6n1 : 0.95 202.20 12.64 0.00 0.00 280391.68 14417.92 249910.61 00:30:21.811 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:21.811 Verification LBA range: start 0x0 length 0x400 00:30:21.812 Nvme7n1 : 0.94 203.34 12.71 0.00 0.00 271805.16 20425.39 251658.24 00:30:21.812 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:21.812 Verification LBA range: start 0x0 length 0x400 00:30:21.812 Nvme8n1 : 0.96 266.92 16.68 0.00 0.00 202979.84 10758.83 246415.36 00:30:21.812 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:21.812 Verification LBA range: start 0x0 length 0x400 00:30:21.812 Nvme9n1 : 0.95 210.39 13.15 0.00 0.00 248121.00 4751.36 249910.61 00:30:21.812 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:21.812 Verification LBA range: start 0x0 length 0x400 00:30:21.812 Nvme10n1 : 0.96 199.71 12.48 0.00 0.00 258687.43 16274.77 265639.25 00:30:21.812 =================================================================================================================== 00:30:21.812 Total : 2398.45 149.90 0.00 0.00 238860.05 4751.36 274377.39 00:30:22.073 14:32:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:30:23.015 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 683941 00:30:23.015 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:30:23.015 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:23.016 rmmod nvme_tcp 00:30:23.016 rmmod nvme_fabrics 00:30:23.016 rmmod nvme_keyring 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 683941 ']' 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 683941 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 683941 ']' 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 683941 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:23.016 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 683941 00:30:23.276 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:23.277 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:23.277 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 683941' 00:30:23.277 killing process with pid 683941 00:30:23.277 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 683941 00:30:23.277 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 683941 00:30:23.277 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:23.277 14:32:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:23.277 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:23.277 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:23.277 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:23.277 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.277 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:23.277 14:32:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.823 14:32:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:25.823 00:30:25.823 real 0m7.829s 00:30:25.823 user 0m23.358s 00:30:25.823 sys 0m1.288s 00:30:25.823 14:32:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:25.823 14:32:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:25.823 ************************************ 00:30:25.823 END TEST nvmf_shutdown_tc2 00:30:25.823 ************************************ 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:25.823 ************************************ 00:30:25.823 START TEST nvmf_shutdown_tc3 00:30:25.823 ************************************ 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:25.823 
14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:25.823 14:32:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:25.823 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.823 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:25.824 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:25.824 Found net devices under 0000:31:00.0: cvl_0_0 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.824 14:32:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:25.824 Found net devices under 0000:31:00.1: cvl_0_1 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:25.824 14:32:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:25.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:25.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:30:25.824 00:30:25.824 --- 10.0.0.2 ping statistics --- 00:30:25.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.824 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:25.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:25.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:30:25.824 00:30:25.824 --- 10.0.0.1 ping statistics --- 00:30:25.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.824 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=685713 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 685713 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 685713 ']' 00:30:25.824 14:32:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:25.824 14:32:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:26.086 [2024-06-07 14:32:49.487826] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:30:26.086 [2024-06-07 14:32:49.487887] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.086 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.086 [2024-06-07 14:32:49.581912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:26.086 [2024-06-07 14:32:49.616483] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.086 [2024-06-07 14:32:49.616520] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.086 [2024-06-07 14:32:49.616530] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.086 [2024-06-07 14:32:49.616534] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.086 [2024-06-07 14:32:49.616538] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
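Aside: the nvmfappstart/waitforlisten exchange traced above boils down to launching nvmf_tgt inside the target network namespace and polling its RPC socket until it answers. The sketch below is a minimal reconstruction, not the real autotest_common.sh helpers; the namespace, binary path, flags and socket path mirror the log, while the retry budget, sleep interval and the use of rpc_get_methods as a liveness probe are assumptions.

```bash
# Minimal reconstruction (assumed, not the real waitforlisten) of the
# start-and-wait pattern traced above.
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

# Launch the target inside the namespace created by nvmf_tcp_init earlier.
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Poll the RPC socket until the target answers; rpc_get_methods is used here
# only as a cheap liveness probe (assumed, the helper may probe differently).
for _ in $(seq 1 100); do
    if scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods > /dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
```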
00:30:26.086 [2024-06-07 14:32:49.616646] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:26.086 [2024-06-07 14:32:49.616805] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:26.086 [2024-06-07 14:32:49.616951] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.086 [2024-06-07 14:32:49.616953] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:30:26.658 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:26.658 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:30:26.658 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:26.658 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:26.658 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:26.658 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.658 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:26.658 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:26.658 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:26.658 [2024-06-07 14:32:50.301515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:26.921 14:32:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:26.921 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:26.921 Malloc1 00:30:26.921 [2024-06-07 14:32:50.400053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.921 Malloc2 00:30:26.921 Malloc3 00:30:26.921 Malloc4 00:30:26.921 Malloc5 00:30:26.921 Malloc6 00:30:27.182 Malloc7 00:30:27.182 Malloc8 00:30:27.182 Malloc9 00:30:27.182 Malloc10 00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=685974 00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 685974 /var/tmp/bdevperf.sock 00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 685974 ']' 00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:27.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
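Aside: the create_subsystems loop traced above (target/shutdown.sh@27-35) appends one block of RPCs per subsystem to rpcs.txt and then feeds the whole file to the target through a single rpc_cmd call; the Malloc1..Malloc10 bdevs and the "Listening on 10.0.0.2 port 4420" notice are its output. A hedged sketch of what each batched block typically contains follows; the NQNs, address and port are taken from the log, while the malloc size, block size and serial-number format are assumptions.

```bash
# Hypothetical reconstruction of the per-subsystem RPC batch built by
# create_subsystems: one malloc bdev, one subsystem, one namespace and one
# TCP listener per index. Sizes and serial strings are assumptions.
for i in $(seq 1 10); do
cat << EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done > rpcs.txt

# rpc.py reads one command per line from stdin, which is how the single
# rpc_cmd invocation in the log consumes rpcs.txt.
scripts/rpc.py < rpcs.txt
```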
00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:30:27.182 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:30:27.183 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.183 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.183 { 00:30:27.183 "params": { 00:30:27.183 "name": "Nvme$subsystem", 00:30:27.183 "trtype": "$TEST_TRANSPORT", 00:30:27.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.183 "adrfam": "ipv4", 00:30:27.183 "trsvcid": "$NVMF_PORT", 00:30:27.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.183 "hdgst": ${hdgst:-false}, 00:30:27.183 "ddgst": ${ddgst:-false} 00:30:27.183 }, 00:30:27.183 "method": "bdev_nvme_attach_controller" 00:30:27.183 } 00:30:27.183 EOF 00:30:27.183 )") 00:30:27.183 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:27.183 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.183 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.183 { 00:30:27.183 "params": { 00:30:27.183 "name": "Nvme$subsystem", 00:30:27.183 "trtype": "$TEST_TRANSPORT", 00:30:27.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.183 "adrfam": "ipv4", 00:30:27.183 "trsvcid": "$NVMF_PORT", 00:30:27.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.183 "hdgst": ${hdgst:-false}, 00:30:27.183 "ddgst": ${ddgst:-false} 00:30:27.183 }, 00:30:27.183 "method": "bdev_nvme_attach_controller" 00:30:27.183 } 00:30:27.183 EOF 00:30:27.183 )") 00:30:27.183 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:27.183 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.183 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.183 { 00:30:27.183 "params": { 00:30:27.183 "name": "Nvme$subsystem", 00:30:27.183 "trtype": "$TEST_TRANSPORT", 00:30:27.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.183 "adrfam": "ipv4", 00:30:27.183 "trsvcid": "$NVMF_PORT", 00:30:27.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.183 "hdgst": ${hdgst:-false}, 00:30:27.183 "ddgst": ${ddgst:-false} 00:30:27.183 }, 00:30:27.183 "method": "bdev_nvme_attach_controller" 00:30:27.183 } 00:30:27.183 EOF 00:30:27.183 )") 00:30:27.183 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:27.183 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:30:27.183 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.183 { 00:30:27.183 "params": { 00:30:27.183 "name": "Nvme$subsystem", 00:30:27.183 "trtype": "$TEST_TRANSPORT", 00:30:27.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.183 "adrfam": "ipv4", 00:30:27.183 "trsvcid": "$NVMF_PORT", 00:30:27.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.183 "hdgst": ${hdgst:-false}, 00:30:27.183 "ddgst": ${ddgst:-false} 00:30:27.183 }, 00:30:27.183 "method": "bdev_nvme_attach_controller" 00:30:27.183 } 00:30:27.183 EOF 00:30:27.183 )") 00:30:27.183 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.444 { 00:30:27.444 "params": { 00:30:27.444 "name": "Nvme$subsystem", 00:30:27.444 "trtype": "$TEST_TRANSPORT", 00:30:27.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.444 "adrfam": "ipv4", 00:30:27.444 "trsvcid": "$NVMF_PORT", 00:30:27.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.444 "hdgst": ${hdgst:-false}, 00:30:27.444 "ddgst": ${ddgst:-false} 00:30:27.444 }, 00:30:27.444 "method": "bdev_nvme_attach_controller" 00:30:27.444 } 00:30:27.444 EOF 00:30:27.444 )") 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.444 { 00:30:27.444 "params": { 00:30:27.444 "name": "Nvme$subsystem", 00:30:27.444 "trtype": "$TEST_TRANSPORT", 00:30:27.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.444 "adrfam": "ipv4", 00:30:27.444 "trsvcid": "$NVMF_PORT", 00:30:27.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.444 "hdgst": ${hdgst:-false}, 00:30:27.444 "ddgst": ${ddgst:-false} 00:30:27.444 }, 00:30:27.444 "method": "bdev_nvme_attach_controller" 00:30:27.444 } 00:30:27.444 EOF 00:30:27.444 )") 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:27.444 [2024-06-07 14:32:50.843248] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
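Aside: both shutdown tests gate on the same read-I/O poll; its xtrace appears above for tc2 and again below once this bdevperf instance is running. A condensed reconstruction of that waitforio helper (target/shutdown.sh@50-69), with scripts/rpc.py standing in for the test's rpc_cmd wrapper, is:

```bash
# Condensed reconstruction of waitforio as traced in this log: poll the bdev's
# read counter over the bdevperf RPC socket until it reaches 100 ops, giving
# up after 10 attempts spaced 0.25 s apart.
waitforio() {
    local sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1
```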
00:30:27.444 [2024-06-07 14:32:50.843299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid685974 ] 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.444 { 00:30:27.444 "params": { 00:30:27.444 "name": "Nvme$subsystem", 00:30:27.444 "trtype": "$TEST_TRANSPORT", 00:30:27.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.444 "adrfam": "ipv4", 00:30:27.444 "trsvcid": "$NVMF_PORT", 00:30:27.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.444 "hdgst": ${hdgst:-false}, 00:30:27.444 "ddgst": ${ddgst:-false} 00:30:27.444 }, 00:30:27.444 "method": "bdev_nvme_attach_controller" 00:30:27.444 } 00:30:27.444 EOF 00:30:27.444 )") 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.444 { 00:30:27.444 "params": { 00:30:27.444 "name": "Nvme$subsystem", 00:30:27.444 "trtype": "$TEST_TRANSPORT", 00:30:27.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.444 "adrfam": "ipv4", 00:30:27.444 "trsvcid": "$NVMF_PORT", 00:30:27.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.444 "hdgst": ${hdgst:-false}, 00:30:27.444 "ddgst": ${ddgst:-false} 00:30:27.444 }, 00:30:27.444 "method": "bdev_nvme_attach_controller" 00:30:27.444 } 00:30:27.444 EOF 00:30:27.444 )") 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.444 { 00:30:27.444 "params": { 00:30:27.444 "name": "Nvme$subsystem", 00:30:27.444 "trtype": "$TEST_TRANSPORT", 00:30:27.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.444 "adrfam": "ipv4", 00:30:27.444 "trsvcid": "$NVMF_PORT", 00:30:27.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.444 "hdgst": ${hdgst:-false}, 00:30:27.444 "ddgst": ${ddgst:-false} 00:30:27.444 }, 00:30:27.444 "method": "bdev_nvme_attach_controller" 00:30:27.444 } 00:30:27.444 EOF 00:30:27.444 )") 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.444 { 00:30:27.444 "params": { 00:30:27.444 "name": "Nvme$subsystem", 00:30:27.444 "trtype": "$TEST_TRANSPORT", 00:30:27.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.444 "adrfam": "ipv4", 00:30:27.444 "trsvcid": "$NVMF_PORT", 00:30:27.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.444 "hdgst": ${hdgst:-false}, 
00:30:27.444 "ddgst": ${ddgst:-false} 00:30:27.444 }, 00:30:27.444 "method": "bdev_nvme_attach_controller" 00:30:27.444 } 00:30:27.444 EOF 00:30:27.444 )") 00:30:27.444 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:30:27.444 14:32:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:27.444 "params": { 00:30:27.444 "name": "Nvme1", 00:30:27.444 "trtype": "tcp", 00:30:27.444 "traddr": "10.0.0.2", 00:30:27.444 "adrfam": "ipv4", 00:30:27.444 "trsvcid": "4420", 00:30:27.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:27.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:27.444 "hdgst": false, 00:30:27.445 "ddgst": false 00:30:27.445 }, 00:30:27.445 "method": "bdev_nvme_attach_controller" 00:30:27.445 },{ 00:30:27.445 "params": { 00:30:27.445 "name": "Nvme2", 00:30:27.445 "trtype": "tcp", 00:30:27.445 "traddr": "10.0.0.2", 00:30:27.445 "adrfam": "ipv4", 00:30:27.445 "trsvcid": "4420", 00:30:27.445 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:27.445 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:27.445 "hdgst": false, 00:30:27.445 "ddgst": false 00:30:27.445 }, 00:30:27.445 "method": "bdev_nvme_attach_controller" 00:30:27.445 },{ 00:30:27.445 "params": { 00:30:27.445 "name": "Nvme3", 00:30:27.445 "trtype": "tcp", 00:30:27.445 "traddr": "10.0.0.2", 00:30:27.445 "adrfam": "ipv4", 00:30:27.445 "trsvcid": "4420", 00:30:27.445 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:27.445 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:27.445 "hdgst": false, 00:30:27.445 "ddgst": false 00:30:27.445 }, 00:30:27.445 "method": "bdev_nvme_attach_controller" 00:30:27.445 },{ 00:30:27.445 "params": { 00:30:27.445 "name": "Nvme4", 00:30:27.445 "trtype": "tcp", 00:30:27.445 "traddr": "10.0.0.2", 00:30:27.445 "adrfam": "ipv4", 00:30:27.445 "trsvcid": "4420", 00:30:27.445 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:27.445 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:27.445 "hdgst": false, 00:30:27.445 "ddgst": false 00:30:27.445 }, 00:30:27.445 "method": "bdev_nvme_attach_controller" 00:30:27.445 },{ 00:30:27.445 "params": { 00:30:27.445 "name": "Nvme5", 00:30:27.445 "trtype": "tcp", 00:30:27.445 "traddr": "10.0.0.2", 00:30:27.445 "adrfam": "ipv4", 00:30:27.445 "trsvcid": "4420", 00:30:27.445 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:27.445 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:27.445 "hdgst": false, 00:30:27.445 "ddgst": false 00:30:27.445 }, 00:30:27.445 "method": "bdev_nvme_attach_controller" 00:30:27.445 },{ 00:30:27.445 "params": { 00:30:27.445 "name": "Nvme6", 00:30:27.445 "trtype": "tcp", 00:30:27.445 "traddr": "10.0.0.2", 00:30:27.445 "adrfam": "ipv4", 00:30:27.445 "trsvcid": "4420", 00:30:27.445 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:27.445 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:27.445 "hdgst": false, 00:30:27.445 "ddgst": false 00:30:27.445 }, 00:30:27.445 "method": "bdev_nvme_attach_controller" 00:30:27.445 },{ 00:30:27.445 "params": { 00:30:27.445 "name": "Nvme7", 00:30:27.445 "trtype": "tcp", 00:30:27.445 "traddr": "10.0.0.2", 00:30:27.445 "adrfam": "ipv4", 00:30:27.445 "trsvcid": "4420", 00:30:27.445 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:27.445 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:27.445 "hdgst": false, 00:30:27.445 "ddgst": false 
00:30:27.445 }, 00:30:27.445 "method": "bdev_nvme_attach_controller" 00:30:27.445 },{ 00:30:27.445 "params": { 00:30:27.445 "name": "Nvme8", 00:30:27.445 "trtype": "tcp", 00:30:27.445 "traddr": "10.0.0.2", 00:30:27.445 "adrfam": "ipv4", 00:30:27.445 "trsvcid": "4420", 00:30:27.445 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:27.445 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:27.445 "hdgst": false, 00:30:27.445 "ddgst": false 00:30:27.445 }, 00:30:27.445 "method": "bdev_nvme_attach_controller" 00:30:27.445 },{ 00:30:27.445 "params": { 00:30:27.445 "name": "Nvme9", 00:30:27.445 "trtype": "tcp", 00:30:27.445 "traddr": "10.0.0.2", 00:30:27.445 "adrfam": "ipv4", 00:30:27.445 "trsvcid": "4420", 00:30:27.445 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:27.445 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:27.445 "hdgst": false, 00:30:27.445 "ddgst": false 00:30:27.445 }, 00:30:27.445 "method": "bdev_nvme_attach_controller" 00:30:27.445 },{ 00:30:27.445 "params": { 00:30:27.445 "name": "Nvme10", 00:30:27.445 "trtype": "tcp", 00:30:27.445 "traddr": "10.0.0.2", 00:30:27.445 "adrfam": "ipv4", 00:30:27.445 "trsvcid": "4420", 00:30:27.445 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:27.445 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:27.445 "hdgst": false, 00:30:27.445 "ddgst": false 00:30:27.445 }, 00:30:27.445 "method": "bdev_nvme_attach_controller" 00:30:27.445 }' 00:30:27.445 [2024-06-07 14:32:50.908773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.445 [2024-06-07 14:32:50.940576] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.354 Running I/O for 10 seconds... 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:29.925 14:32:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:30:29.925 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 685713 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 685713 ']' 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 685713 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 685713 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 685713' 00:30:30.200 killing process with pid 685713 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 685713 00:30:30.200 14:32:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 685713 00:30:30.200 [2024-06-07 14:32:53.767761] 
00:30:30.200 [2024-06-07 14:32:53.767761] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2463010 is same with the state(5) to be set
[... same message repeated for tqpair=0x2463010, timestamps 14:32:53.767803 through 14:32:53.768095 ...]
00:30:30.201 [2024-06-07 14:32:53.769069] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24659f0 is same with the state(5) to be set
[... same message repeated for tqpair=0x24659f0 through 14:32:53.769340 ...]
00:30:30.202 [2024-06-07 14:32:53.770229] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24634b0 is same with the state(5) to be set
[... same message repeated for tqpair=0x24634b0 through 14:32:53.770512 ...]
00:30:30.202 [2024-06-07 14:32:53.771508] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2463950 is same with the state(5) to be set
[... same message repeated for tqpair=0x2463950 through 14:32:53.771819 ...]
00:30:30.203 [2024-06-07 14:32:53.772936] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24642d0 is same with the state(5) to be set
[... same message repeated for tqpair=0x24642d0 through 14:32:53.773238 ...]
00:30:30.204 [2024-06-07 14:32:53.773848] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464770 is same with the state(5) to be set
[... same message repeated for tqpair=0x2464770 through 14:32:53.774145 ...]
00:30:30.205 [2024-06-07 14:32:53.775117] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set
[... same message repeated for tqpair=0x24650b0 through 14:32:53.775350 ...]
00:30:30.205 [2024-06-07 14:32:53.775360] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same
with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.775365] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.775369] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.775374] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.775379] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.775383] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.775388] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.775392] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.775396] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.775401] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.775407] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.775412] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.775416] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.775421] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.780860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.780894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.780904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.780911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.780920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.780927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.780935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.780942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:30.206 [2024-06-07 14:32:53.780949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x284dfc0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.780978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.780988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.780997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x282ea30 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.781070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x282f970 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.781159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781167] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781228] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29ca0a0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.781252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781313] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2828450 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.781333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:30.206 [2024-06-07 14:32:53.781366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781395] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23efdb0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.781416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2848cc0 is same with the state(5) to be set 00:30:30.206 [2024-06-07 14:32:53.781500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.206 [2024-06-07 14:32:53.781524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.206 [2024-06-07 14:32:53.781533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.207 [2024-06-07 14:32:53.781542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.781549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.207 [2024-06-07 14:32:53.781556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.781563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x282ff80 is same with the state(5) to be set 00:30:30.207 [2024-06-07 14:32:53.781586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.207 [2024-06-07 14:32:53.781598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.781607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.207 [2024-06-07 14:32:53.781614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.781622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.207 [2024-06-07 14:32:53.781629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.781637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.207 [2024-06-07 14:32:53.781644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.781650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a11980 is same with the state(5) to be set 00:30:30.207 [2024-06-07 14:32:53.783184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:30.207 [2024-06-07 14:32:53.783630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.207 [2024-06-07 14:32:53.783754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.207 [2024-06-07 14:32:53.783761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.783770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.783779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.783789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 
[2024-06-07 14:32:53.783796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.783805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.783812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.783821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.783828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.783837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.783845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.783853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.783860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.783869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.783877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.783886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.783893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.783902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.783909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.783918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.783925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.783934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.783941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.783950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.783957] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.783966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.783973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.783984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.783991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.208 [2024-06-07 14:32:53.784344] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0x2999b30 was disconnected and freed. reset controller. 00:30:30.208 [2024-06-07 14:32:53.784456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.208 [2024-06-07 14:32:53.784580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.208 [2024-06-07 14:32:53.784587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.784985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.784994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.785001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.785010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.785016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.785505] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.209 [2024-06-07 14:32:53.785526] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.209 [2024-06-07 14:32:53.785533] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.209 [2024-06-07 14:32:53.785538] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.209 [2024-06-07 14:32:53.785544] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24650b0 is same with the state(5) to be set 00:30:30.209 [2024-06-07 14:32:53.786010] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465550 is same with the state(5) to be set 00:30:30.209 [2024-06-07 14:32:53.786025] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465550 is same with the state(5) to be set 00:30:30.209 [2024-06-07 14:32:53.786030] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465550 is same with the state(5) to be set 00:30:30.209 [2024-06-07 14:32:53.795207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.795240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.795252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.795260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.209 [2024-06-07 14:32:53.795270] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.209 [2024-06-07 14:32:53.795278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.795749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.795814] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x299c080 was disconnected and freed. reset controller. 
00:30:30.210 [2024-06-07 14:32:53.796274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.796304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.796319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.796327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.796337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.796344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.796355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.796362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.796372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.796379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.796388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.796395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.796405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.796413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.796422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.796429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.796440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.796448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.796457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.796464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 
14:32:53.796474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.796482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.210 [2024-06-07 14:32:53.796493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.210 [2024-06-07 14:32:53.796501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796649] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.796988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.796998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.797005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.797015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.797022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.797032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.797039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.797049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.797056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.797066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.797073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.797083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.797090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.797100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.797107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.797117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.797124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.797134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.797141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.797151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.797158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.797168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.797175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.211 [2024-06-07 14:32:53.797186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.211 [2024-06-07 14:32:53.797193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.797215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.797222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.797233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.797241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.797250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.797258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.797269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.797276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.797286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.797294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.797303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.797311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.797321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.797329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.797338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.797346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.797356] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.797363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.797373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.797380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.797390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.797396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.797406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.797416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.797993] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x29a2370 was disconnected and freed. reset controller. 00:30:30.212 [2024-06-07 14:32:53.798061] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x284dfc0 (9): Bad file descriptor 00:30:30.212 [2024-06-07 14:32:53.798082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x282ea30 (9): Bad file descriptor 00:30:30.212 [2024-06-07 14:32:53.798095] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x282f970 (9): Bad file descriptor 00:30:30.212 [2024-06-07 14:32:53.798112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29ca0a0 (9): Bad file descriptor 00:30:30.212 [2024-06-07 14:32:53.798125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2828450 (9): Bad file descriptor 00:30:30.212 [2024-06-07 14:32:53.798140] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23efdb0 (9): Bad file descriptor 00:30:30.212 [2024-06-07 14:32:53.798155] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2848cc0 (9): Bad file descriptor 00:30:30.212 [2024-06-07 14:32:53.798172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x282ff80 (9): Bad file descriptor 00:30:30.212 [2024-06-07 14:32:53.798184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a11980 (9): Bad file descriptor 00:30:30.212 [2024-06-07 14:32:53.798225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.212 [2024-06-07 14:32:53.798235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.798244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.212 [2024-06-07 14:32:53.798252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.798260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.212 [2024-06-07 14:32:53.798267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.798275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.212 [2024-06-07 14:32:53.798283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.798290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a13b10 is same with the state(5) to be set 00:30:30.212 [2024-06-07 14:32:53.800932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.800953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.800968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.800976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.800986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.800994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.212 [2024-06-07 14:32:53.801288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.212 [2024-06-07 14:32:53.801295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:30.213 [2024-06-07 14:32:53.801813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.801972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 
14:32:53.801989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.213 [2024-06-07 14:32:53.801999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.213 [2024-06-07 14:32:53.802007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.802016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.802023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.802033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.802040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.802050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.802058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.802068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.802076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.802086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.802093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.802144] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x29a0e60 was disconnected and freed. reset controller. 
00:30:30.214 [2024-06-07 14:32:53.803369] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:30.214 [2024-06-07 14:32:53.804877] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:30:30.214 [2024-06-07 14:32:53.805135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.214 [2024-06-07 14:32:53.805158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x282ea30 with addr=10.0.0.2, port=4420 00:30:30.214 [2024-06-07 14:32:53.805168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x282ea30 is same with the state(5) to be set 00:30:30.214 [2024-06-07 14:32:53.806193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:30:30.214 [2024-06-07 14:32:53.806219] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:30:30.214 [2024-06-07 14:32:53.806232] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a13b10 (9): Bad file descriptor 00:30:30.214 [2024-06-07 14:32:53.806575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.214 [2024-06-07 14:32:53.806617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x282ff80 with addr=10.0.0.2, port=4420 00:30:30.214 [2024-06-07 14:32:53.806629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x282ff80 is same with the state(5) to be set 00:30:30.214 [2024-06-07 14:32:53.806645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x282ea30 (9): Bad file descriptor 00:30:30.214 [2024-06-07 14:32:53.806698] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:30.214 [2024-06-07 14:32:53.806741] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:30.214 [2024-06-07 14:32:53.806779] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:30.214 [2024-06-07 14:32:53.806832] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:30.214 [2024-06-07 14:32:53.806870] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:30.214 [2024-06-07 14:32:53.806916] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:30:30.214 [2024-06-07 14:32:53.807447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.214 [2024-06-07 14:32:53.807466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29ca0a0 with addr=10.0.0.2, port=4420 00:30:30.214 [2024-06-07 14:32:53.807475] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29ca0a0 is same with the state(5) to be set 00:30:30.214 [2024-06-07 14:32:53.807496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x282ff80 (9): Bad file descriptor 00:30:30.214 [2024-06-07 14:32:53.807507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:30.214 [2024-06-07 14:32:53.807514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:30.214 [2024-06-07 14:32:53.807523] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:30:30.214 [2024-06-07 14:32:53.807564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 
14:32:53.807762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.214 [2024-06-07 14:32:53.807889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.214 [2024-06-07 14:32:53.807896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.807907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.807917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.807928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.807936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.807946] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.807955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.807965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.807973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.807984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.807992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808335] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.215 [2024-06-07 14:32:53.808649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.215 [2024-06-07 14:32:53.808658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.808667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.808675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.808684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.808692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.808701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.808709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.808719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.808728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.808738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.808746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.808756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.808764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.808772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f2f30 is same with the state(5) to be set 00:30:30.216 [2024-06-07 14:32:53.808829] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23f2f30 was disconnected and freed. reset controller. 00:30:30.216 [2024-06-07 14:32:53.808905] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.216 [2024-06-07 14:32:53.809077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.216 [2024-06-07 14:32:53.809090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a13b10 with addr=10.0.0.2, port=4420 00:30:30.216 [2024-06-07 14:32:53.809099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a13b10 is same with the state(5) to be set 00:30:30.216 [2024-06-07 14:32:53.809112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29ca0a0 (9): Bad file descriptor 00:30:30.216 [2024-06-07 14:32:53.809121] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:30:30.216 [2024-06-07 14:32:53.809128] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:30:30.216 [2024-06-07 14:32:53.809136] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:30:30.216 [2024-06-07 14:32:53.809178] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:30.216 [2024-06-07 14:32:53.810443] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:30.216 [2024-06-07 14:32:53.810475] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:30:30.216 [2024-06-07 14:32:53.810495] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a13b10 (9): Bad file descriptor 00:30:30.216 [2024-06-07 14:32:53.810504] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:30:30.216 [2024-06-07 14:32:53.810510] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:30:30.216 [2024-06-07 14:32:53.810518] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:30:30.216 [2024-06-07 14:32:53.810554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:30.216 [2024-06-07 14:32:53.810878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.810989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.810997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.216 [2024-06-07 14:32:53.811008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.216 [2024-06-07 14:32:53.811016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 
14:32:53.811063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811245] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.811698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.811707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29a8d90 is same with the state(5) to be set 00:30:30.217 [2024-06-07 14:32:53.812981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.217 [2024-06-07 14:32:53.812995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.217 [2024-06-07 14:32:53.813006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.218 [2024-06-07 14:32:53.813626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.218 [2024-06-07 14:32:53.813636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.813981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.813989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.814000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.814007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.814019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.814027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.814037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.814045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.814055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.814062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.814072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.814079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.814089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.814097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.814106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.814114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.814124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.814131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.814142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.814149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.814158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f4400 is same with the state(5) to be set 00:30:30.219 [2024-06-07 14:32:53.815441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.815455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.815467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.815475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.815485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.815493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.815504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.815512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.815525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.815532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.815542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.815549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.815558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.815566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.815576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.815583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.815593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.815601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.815612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-07 14:32:53.815621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.219 [2024-06-07 14:32:53.815632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.815990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.815997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.220 [2024-06-07 14:32:53.816351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:30.220 [2024-06-07 14:32:53.816368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.220 [2024-06-07 14:32:53.816378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.816387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.816397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.816405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.816416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.816423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.816433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.816442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.816454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.816462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.816472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.816481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.816491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.816498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.816509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.816516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.816526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.816534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.816544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 
14:32:53.816552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.816561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.816569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.816579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.816586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.816596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.816603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.816612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.816620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.816629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29535e0 is same with the state(5) to be set 00:30:30.221 [2024-06-07 14:32:53.817885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.817900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.817911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.817920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.817931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.817942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.817952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.817961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.817972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.817981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.817992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.221 [2024-06-07 14:32:53.818360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.221 [2024-06-07 14:32:53.818369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.818981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.818992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.819000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.819009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.819017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.819027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.819035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.819045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.819053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.222 [2024-06-07 14:32:53.819061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x299ab60 is same with the state(5) to be set 00:30:30.222 [2024-06-07 14:32:53.820337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.222 [2024-06-07 14:32:53.820356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820528] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.223 [2024-06-07 14:32:53.820920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.223 [2024-06-07 14:32:53.820928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.820938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.820946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.820957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.820965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.820976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.820983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.820995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:30.224 [2024-06-07 14:32:53.821270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 
14:32:53.821453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.224 [2024-06-07 14:32:53.821497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.224 [2024-06-07 14:32:53.821506] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x32d0d10 is same with the state(5) to be set 00:30:30.224 [2024-06-07 14:32:53.822980] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.224 [2024-06-07 14:32:53.823001] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:30.224 [2024-06-07 14:32:53.823012] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:30:30.224 [2024-06-07 14:32:53.823022] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:30:30.224 [2024-06-07 14:32:53.823441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.224 [2024-06-07 14:32:53.823457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2828450 with addr=10.0.0.2, port=4420 00:30:30.224 [2024-06-07 14:32:53.823465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2828450 is same with the state(5) to be set 00:30:30.224 [2024-06-07 14:32:53.823474] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:30:30.224 [2024-06-07 14:32:53.823481] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:30:30.224 [2024-06-07 14:32:53.823489] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:30:30.224 [2024-06-07 14:32:53.823533] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:30.224 [2024-06-07 14:32:53.823552] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:30.224 [2024-06-07 14:32:53.823562] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:30:30.224 [2024-06-07 14:32:53.823575] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2828450 (9): Bad file descriptor 
00:30:30.224 [2024-06-07 14:32:53.823897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 
00:30:30.486 task offset: 36352 on job bdev=Nvme4n1 fails 
00:30:30.486 
00:30:30.486 Latency(us) 
00:30:30.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:30:30.486 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:30:30.486 Job: Nvme1n1 ended in about 1.13 seconds with error 
00:30:30.486 Verification LBA range: start 0x0 length 0x400 
00:30:30.486 Nvme1n1 : 1.13 174.01 10.88 56.82 0.00 274559.29 19770.03 249910.61 
00:30:30.486 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:30:30.486 Job: Nvme2n1 ended in about 1.12 seconds with error 
00:30:30.486 Verification LBA range: start 0x0 length 0x400 
00:30:30.486 Nvme2n1 : 1.12 178.85 11.18 56.95 0.00 264021.87 17803.95 267386.88 
00:30:30.486 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:30:30.486 Job: Nvme3n1 ended in about 1.13 seconds with error 
00:30:30.486 Verification LBA range: start 0x0 length 0x400 
00:30:30.486 Nvme3n1 : 1.13 170.09 10.63 56.70 0.00 269729.71 20316.16 249910.61 
00:30:30.486 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:30:30.486 Job: Nvme4n1 ended in about 1.11 seconds with error 
00:30:30.486 Verification LBA range: start 0x0 length 0x400 
00:30:30.486 Nvme4n1 : 1.11 229.99 14.37 57.50 0.00 208698.88 15510.19 246415.36 
00:30:30.486 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:30:30.486 Job: Nvme5n1 ended in about 1.13 seconds with error 
00:30:30.486 Verification LBA range: start 0x0 length 0x400 
00:30:30.486 Nvme5n1 : 1.13 169.72 10.61 56.57 0.00 260587.73 17039.36 228939.09 
00:30:30.486 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:30:30.486 Job: Nvme6n1 ended in about 1.13 seconds with error 
00:30:30.486 Verification LBA range: start 0x0 length 0x400 
00:30:30.486 Nvme6n1 : 1.13 169.36 10.58 56.45 0.00 256435.63 18131.63 267386.88 
00:30:30.486 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:30:30.486 Job: Nvme7n1 ended in about 1.11 seconds with error 
00:30:30.486 Verification LBA range: start 0x0 length 0x400 
00:30:30.486 Nvme7n1 : 1.11 229.73 14.36 57.43 0.00 197336.83 11468.80 256901.12 
00:30:30.486 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:30:30.486 Job: Nvme8n1 ended in about 1.14 seconds with error 
00:30:30.486 Verification LBA range: start 0x0 length 0x400 
00:30:30.486 Nvme8n1 : 1.14 168.99 10.56 56.33 0.00 247317.76 15400.96 253405.87 
00:30:30.486 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:30:30.486 Job: Nvme9n1 ended in about 1.12 seconds with error 
00:30:30.486 Verification LBA range: start 0x0 length 0x400 
00:30:30.486 Nvme9n1 : 1.12 171.69 10.73 57.23 0.00 238006.61 7700.48 270882.13 
00:30:30.486 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:30:30.486 Job: Nvme10n1 ended in about 1.12 seconds with error 
00:30:30.486 Verification LBA range: start 0x0 length 0x400 
00:30:30.486 Nvme10n1 : 1.12 171.92 10.74 57.31 0.00 232778.88 16930.13 263891.63 
00:30:30.486 =================================================================================================================== 
00:30:30.486 Total : 1834.33 114.65 569.28 0.00 243067.70 7700.48 270882.13 00:30:30.486 [2024-06-07 14:32:53.848564] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:30.486 [2024-06-07 14:32:53.848596] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:30:30.486 [2024-06-07 14:32:53.848609] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.487 [2024-06-07 14:32:53.849019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.487 [2024-06-07 14:32:53.849036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23efdb0 with addr=10.0.0.2, port=4420 00:30:30.487 [2024-06-07 14:32:53.849046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23efdb0 is same with the state(5) to be set 00:30:30.487 [2024-06-07 14:32:53.849135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.487 [2024-06-07 14:32:53.849147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x282f970 with addr=10.0.0.2, port=4420 00:30:30.487 [2024-06-07 14:32:53.849154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x282f970 is same with the state(5) to be set 00:30:30.487 [2024-06-07 14:32:53.849462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.487 [2024-06-07 14:32:53.849474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x284dfc0 with addr=10.0.0.2, port=4420 00:30:30.487 [2024-06-07 14:32:53.849482] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x284dfc0 is same with the state(5) to be set 00:30:30.487 [2024-06-07 14:32:53.850806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:30:30.487 [2024-06-07 14:32:53.850820] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:30:30.487 [2024-06-07 14:32:53.850830] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:30:30.487 [2024-06-07 14:32:53.851221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.487 [2024-06-07 14:32:53.851235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2848cc0 with addr=10.0.0.2, port=4420 00:30:30.487 [2024-06-07 14:32:53.851242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2848cc0 is same with the state(5) to be set 00:30:30.487 [2024-06-07 14:32:53.851574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.487 [2024-06-07 14:32:53.851586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a11980 with addr=10.0.0.2, port=4420 00:30:30.487 [2024-06-07 14:32:53.851594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a11980 is same with the state(5) to be set 00:30:30.487 [2024-06-07 14:32:53.851605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23efdb0 (9): Bad file descriptor 00:30:30.487 [2024-06-07 14:32:53.851616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x282f970 (9): Bad file descriptor 00:30:30.487 [2024-06-07 14:32:53.851625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x284dfc0 (9): Bad file descriptor 00:30:30.487 [2024-06-07 14:32:53.851634] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:30:30.487 [2024-06-07 14:32:53.851641] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:30:30.487 [2024-06-07 14:32:53.851649] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:30:30.487 [2024-06-07 14:32:53.851690] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:30.487 [2024-06-07 14:32:53.851704] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:30.487 [2024-06-07 14:32:53.851716] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:30.487 [2024-06-07 14:32:53.851728] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:30:30.487 [2024-06-07 14:32:53.851805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.487 [2024-06-07 14:32:53.852083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.487 [2024-06-07 14:32:53.852096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x282ea30 with addr=10.0.0.2, port=4420 00:30:30.487 [2024-06-07 14:32:53.852104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x282ea30 is same with the state(5) to be set 00:30:30.487 [2024-06-07 14:32:53.852413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.487 [2024-06-07 14:32:53.852425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x282ff80 with addr=10.0.0.2, port=4420 00:30:30.487 [2024-06-07 14:32:53.852432] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x282ff80 is same with the state(5) to be set 00:30:30.487 [2024-06-07 14:32:53.852771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.487 [2024-06-07 14:32:53.852782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29ca0a0 with addr=10.0.0.2, port=4420 00:30:30.487 [2024-06-07 14:32:53.852790] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29ca0a0 is same with the state(5) to be set 00:30:30.487 [2024-06-07 14:32:53.852799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2848cc0 (9): Bad file descriptor 00:30:30.487 [2024-06-07 14:32:53.852808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a11980 (9): Bad file descriptor 00:30:30.487 [2024-06-07 14:32:53.852818] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:30.487 [2024-06-07 14:32:53.852825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:30.487 [2024-06-07 14:32:53.852832] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:30.487 [2024-06-07 14:32:53.852842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:30:30.487 [2024-06-07 14:32:53.852852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:30:30.487 [2024-06-07 14:32:53.852860] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:30:30.487 [2024-06-07 14:32:53.852871] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:30:30.487 [2024-06-07 14:32:53.852878] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:30:30.487 [2024-06-07 14:32:53.852885] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:30:30.487 [2024-06-07 14:32:53.852949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:30:30.487 [2024-06-07 14:32:53.852960] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.487 [2024-06-07 14:32:53.852967] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.487 [2024-06-07 14:32:53.852973] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.487 [2024-06-07 14:32:53.852986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x282ea30 (9): Bad file descriptor 00:30:30.487 [2024-06-07 14:32:53.852996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x282ff80 (9): Bad file descriptor 00:30:30.487 [2024-06-07 14:32:53.853005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29ca0a0 (9): Bad file descriptor 00:30:30.487 [2024-06-07 14:32:53.853015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:30:30.487 [2024-06-07 14:32:53.853021] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:30:30.487 [2024-06-07 14:32:53.853028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:30:30.487 [2024-06-07 14:32:53.853038] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:30:30.487 [2024-06-07 14:32:53.853044] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:30:30.487 [2024-06-07 14:32:53.853051] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:30:30.487 [2024-06-07 14:32:53.853079] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.487 [2024-06-07 14:32:53.853086] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:30.487 [2024-06-07 14:32:53.853432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.487 [2024-06-07 14:32:53.853444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a13b10 with addr=10.0.0.2, port=4420 00:30:30.487 [2024-06-07 14:32:53.853453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a13b10 is same with the state(5) to be set 00:30:30.487 [2024-06-07 14:32:53.853460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:30:30.487 [2024-06-07 14:32:53.853466] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:30:30.487 [2024-06-07 14:32:53.853473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:30:30.487 [2024-06-07 14:32:53.853483] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:30:30.487 [2024-06-07 14:32:53.853490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:30:30.487 [2024-06-07 14:32:53.853497] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:30:30.487 [2024-06-07 14:32:53.853507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:30:30.487 [2024-06-07 14:32:53.853513] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:30:30.487 [2024-06-07 14:32:53.853523] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:30:30.487 [2024-06-07 14:32:53.853552] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.487 [2024-06-07 14:32:53.853560] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.487 [2024-06-07 14:32:53.853566] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:30.487 [2024-06-07 14:32:53.853574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a13b10 (9): Bad file descriptor 00:30:30.487 [2024-06-07 14:32:53.853602] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:30:30.487 [2024-06-07 14:32:53.853611] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:30:30.487 [2024-06-07 14:32:53.853618] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:30:30.487 [2024-06-07 14:32:53.853647] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:30.487 14:32:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:30:30.487 14:32:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:30:31.429 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 685974 00:30:31.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (685974) - No such process 00:30:31.430 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:30:31.430 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:30:31.430 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:30:31.430 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:31.430 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:31.430 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:30:31.430 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:31.430 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:30:31.430 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:31.430 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:30:31.430 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:31.430 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:31.430 rmmod nvme_tcp 00:30:31.690 rmmod nvme_fabrics 00:30:31.690 rmmod nvme_keyring 00:30:31.690 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:31.690 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:30:31.690 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:30:31.690 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:30:31.690 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:31.690 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:31.690 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:31.690 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:31.690 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:31.690 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.690 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:31.690 14:32:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.603 14:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:33.603 00:30:33.603 real 0m8.136s 00:30:33.603 user 0m20.822s 00:30:33.603 sys 0m1.290s 00:30:33.603 
14:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:33.603 14:32:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:33.603 ************************************ 00:30:33.603 END TEST nvmf_shutdown_tc3 00:30:33.603 ************************************ 00:30:33.603 14:32:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:30:33.603 00:30:33.603 real 0m33.445s 00:30:33.603 user 1m17.580s 00:30:33.603 sys 0m9.907s 00:30:33.603 14:32:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:33.603 14:32:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:33.603 ************************************ 00:30:33.603 END TEST nvmf_shutdown 00:30:33.603 ************************************ 00:30:33.865 14:32:57 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:30:33.865 14:32:57 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:33.865 14:32:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:33.865 14:32:57 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:30:33.865 14:32:57 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:33.865 14:32:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:33.865 14:32:57 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:30:33.865 14:32:57 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:33.865 14:32:57 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:33.865 14:32:57 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:33.865 14:32:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:33.865 ************************************ 00:30:33.865 START TEST nvmf_multicontroller 00:30:33.865 ************************************ 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:33.865 * Looking for test storage... 
00:30:33.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.865 14:32:57 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:33.866 14:32:57 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:30:33.866 14:32:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.013 14:33:05 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:42.013 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:42.013 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:42.013 Found net devices under 0000:31:00.0: cvl_0_0 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:42.013 Found net devices under 0000:31:00.1: cvl_0_1 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.013 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.275 14:33:05 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:42.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:30:42.275 00:30:42.275 --- 10.0.0.2 ping statistics --- 00:30:42.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.275 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:42.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:30:42.275 00:30:42.275 --- 10.0.0.1 ping statistics --- 00:30:42.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.275 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=691614 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 691614 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 691614 ']' 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:42.275 14:33:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:42.537 [2024-06-07 14:33:05.946920] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:30:42.537 [2024-06-07 14:33:05.946971] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.537 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.537 [2024-06-07 14:33:06.038704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:42.537 [2024-06-07 14:33:06.080059] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.537 [2024-06-07 14:33:06.080111] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.537 [2024-06-07 14:33:06.080119] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.537 [2024-06-07 14:33:06.080126] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.537 [2024-06-07 14:33:06.080132] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:42.537 [2024-06-07 14:33:06.080206] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:42.537 [2024-06-07 14:33:06.080419] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:42.537 [2024-06-07 14:33:06.080512] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.109 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:43.109 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:30:43.109 14:33:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:43.109 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:43.109 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:43.370 [2024-06-07 14:33:06.769866] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.370 14:33:06 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:43.370 Malloc0 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:43.370 [2024-06-07 14:33:06.829487] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:43.370 [2024-06-07 14:33:06.841457] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:43.370 Malloc1 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.370 14:33:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 
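A condensed sketch of the target-side configuration being applied by the rpc_cmd calls just above and below, assuming SPDK's scripts/rpc.py helper on the default /var/tmp/spdk.sock socket (the trace itself goes through the test suite's rpc_cmd wrapper); the subsystem NQNs, serials, bdev names, the 10.0.0.2 listen address and ports 4420/4421 are the values shown in the surrounding trace:

  # TCP transport plus two 64 MiB / 512 B-block malloc bdevs
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  # one subsystem per bdev, each listening on both test ports
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421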
00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=691799 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 691799 /var/tmp/bdevperf.sock 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 691799 ']' 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:43.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
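Likewise, a sketch of the initiator side that the trace sets up from here: bdevperf is started in wait-for-RPC mode (-z) on its own socket, controllers are attached through that socket, and the attach attempts that follow with a different host NQN, a different subsystem, or -x disable / -x failover are all expected to fail with error -114 because a controller named NVMe0 already exists. The paths and arguments below are the ones shown in the trace:

  # bdevperf: queue depth 128, 4 KiB writes, 1 second run, wait for RPC configuration
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

  # first controller; re-attaching under the same name with different parameters returns -114
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

  # second path to the same subsystem on port 4421, then a second named controller
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

  # run the configured I/O job
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests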
00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:43.371 14:33:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.314 NVMe0n1 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:44.314 1 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.314 request: 00:30:44.314 { 00:30:44.314 "name": "NVMe0", 00:30:44.314 "trtype": "tcp", 00:30:44.314 "traddr": "10.0.0.2", 00:30:44.314 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:44.314 "hostaddr": "10.0.0.2", 00:30:44.314 "hostsvcid": "60000", 00:30:44.314 "adrfam": "ipv4", 00:30:44.314 "trsvcid": "4420", 00:30:44.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.314 "method": 
"bdev_nvme_attach_controller", 00:30:44.314 "req_id": 1 00:30:44.314 } 00:30:44.314 Got JSON-RPC error response 00:30:44.314 response: 00:30:44.314 { 00:30:44.314 "code": -114, 00:30:44.314 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:44.314 } 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.314 request: 00:30:44.314 { 00:30:44.314 "name": "NVMe0", 00:30:44.314 "trtype": "tcp", 00:30:44.314 "traddr": "10.0.0.2", 00:30:44.314 "hostaddr": "10.0.0.2", 00:30:44.314 "hostsvcid": "60000", 00:30:44.314 "adrfam": "ipv4", 00:30:44.314 "trsvcid": "4420", 00:30:44.314 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:44.314 "method": "bdev_nvme_attach_controller", 00:30:44.314 "req_id": 1 00:30:44.314 } 00:30:44.314 Got JSON-RPC error response 00:30:44.314 response: 00:30:44.314 { 00:30:44.314 "code": -114, 00:30:44.314 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:44.314 } 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:44.314 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.315 request: 00:30:44.315 { 00:30:44.315 "name": "NVMe0", 00:30:44.315 "trtype": "tcp", 00:30:44.315 "traddr": "10.0.0.2", 00:30:44.315 "hostaddr": "10.0.0.2", 00:30:44.315 "hostsvcid": "60000", 00:30:44.315 "adrfam": "ipv4", 00:30:44.315 "trsvcid": "4420", 00:30:44.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.315 "multipath": "disable", 00:30:44.315 "method": "bdev_nvme_attach_controller", 00:30:44.315 "req_id": 1 00:30:44.315 } 00:30:44.315 Got JSON-RPC error response 00:30:44.315 response: 00:30:44.315 { 00:30:44.315 "code": -114, 00:30:44.315 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:30:44.315 } 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.315 request: 00:30:44.315 { 00:30:44.315 "name": "NVMe0", 00:30:44.315 "trtype": "tcp", 00:30:44.315 "traddr": "10.0.0.2", 00:30:44.315 "hostaddr": "10.0.0.2", 00:30:44.315 "hostsvcid": "60000", 00:30:44.315 "adrfam": "ipv4", 00:30:44.315 "trsvcid": "4420", 00:30:44.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.315 "multipath": "failover", 00:30:44.315 "method": "bdev_nvme_attach_controller", 00:30:44.315 "req_id": 1 00:30:44.315 } 00:30:44.315 Got JSON-RPC error response 00:30:44.315 response: 00:30:44.315 { 00:30:44.315 "code": -114, 00:30:44.315 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:44.315 } 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:44.315 14:33:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.575 00:30:44.575 14:33:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:44.575 14:33:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:44.575 14:33:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:44.575 14:33:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.575 14:33:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:44.575 14:33:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:44.575 14:33:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:44.575 14:33:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.837 00:30:44.837 14:33:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:44.837 14:33:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:44.837 14:33:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:44.837 14:33:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:44.837 14:33:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:44.837 14:33:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:44.837 14:33:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:44.837 14:33:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:45.777 0 00:30:45.777 14:33:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:45.777 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.778 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:45.778 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.778 14:33:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 691799 00:30:45.778 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 691799 ']' 00:30:45.778 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 691799 00:30:45.778 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:30:45.778 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:45.778 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 691799 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 691799' 00:30:46.039 killing process with pid 691799 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 691799 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 691799 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:30:46.039 14:33:09 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:30:46.039 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:46.039 [2024-06-07 14:33:06.958222] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:30:46.039 [2024-06-07 14:33:06.958277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691799 ] 00:30:46.039 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.039 [2024-06-07 14:33:07.022605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.039 [2024-06-07 14:33:07.054232] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.039 [2024-06-07 14:33:08.258436] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 604a052d-7666-4aea-be75-b45f758739ce already exists 00:30:46.039 [2024-06-07 14:33:08.258466] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:604a052d-7666-4aea-be75-b45f758739ce alias for bdev NVMe1n1 00:30:46.039 [2024-06-07 14:33:08.258476] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:46.039 Running I/O for 1 seconds... 
00:30:46.039 00:30:46.039 Latency(us) 00:30:46.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.039 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:46.039 NVMe0n1 : 1.00 27491.39 107.39 0.00 0.00 4645.35 2075.31 9338.88 00:30:46.039 =================================================================================================================== 00:30:46.039 Total : 27491.39 107.39 0.00 0.00 4645.35 2075.31 9338.88 00:30:46.039 Received shutdown signal, test time was about 1.000000 seconds 00:30:46.039 00:30:46.039 Latency(us) 00:30:46.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.039 =================================================================================================================== 00:30:46.039 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:46.039 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:46.039 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:46.039 rmmod nvme_tcp 00:30:46.039 rmmod nvme_fabrics 00:30:46.039 rmmod nvme_keyring 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 691614 ']' 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 691614 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 691614 ']' 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 691614 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 691614 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 691614' 00:30:46.300 killing process with pid 691614 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 691614 00:30:46.300 14:33:09 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 691614 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:46.300 14:33:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.840 14:33:11 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:48.840 00:30:48.840 real 0m14.586s 00:30:48.840 user 0m16.722s 00:30:48.840 sys 0m6.922s 00:30:48.840 14:33:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:48.841 14:33:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:48.841 ************************************ 00:30:48.841 END TEST nvmf_multicontroller 00:30:48.841 ************************************ 00:30:48.841 14:33:11 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:48.841 14:33:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:48.841 14:33:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:48.841 14:33:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:48.841 ************************************ 00:30:48.841 START TEST nvmf_aer 00:30:48.841 ************************************ 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:48.841 * Looking for test storage... 
00:30:48.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:30:48.841 14:33:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:56.979 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:30:56.979 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:56.979 Found net devices under 0000:31:00.0: cvl_0_0 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:56.979 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:56.980 Found net devices under 0000:31:00.1: cvl_0_1 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:56.980 
14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:56.980 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:56.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:56.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:30:56.980 00:30:56.980 --- 10.0.0.2 ping statistics --- 00:30:56.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.980 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:56.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:56.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:30:56.980 00:30:56.980 --- 10.0.0.1 ping statistics --- 00:30:56.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.980 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=697445 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 697445 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 697445 ']' 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:56.980 14:33:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:56.980 [2024-06-07 14:33:20.225937] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:30:56.980 [2024-06-07 14:33:20.226004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.980 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.980 [2024-06-07 14:33:20.305392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:56.980 [2024-06-07 14:33:20.346183] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.980 [2024-06-07 14:33:20.346231] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:56.980 [2024-06-07 14:33:20.346241] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.980 [2024-06-07 14:33:20.346247] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.980 [2024-06-07 14:33:20.346253] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:56.980 [2024-06-07 14:33:20.346328] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.980 [2024-06-07 14:33:20.346447] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:56.980 [2024-06-07 14:33:20.346605] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.980 [2024-06-07 14:33:20.346605] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:57.551 [2024-06-07 14:33:21.058884] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:57.551 Malloc0 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:57.551 [2024-06-07 14:33:21.115724] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:57.551 [ 00:30:57.551 { 00:30:57.551 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:57.551 "subtype": "Discovery", 00:30:57.551 "listen_addresses": [], 00:30:57.551 "allow_any_host": true, 00:30:57.551 "hosts": [] 00:30:57.551 }, 00:30:57.551 { 00:30:57.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:57.551 "subtype": "NVMe", 00:30:57.551 "listen_addresses": [ 00:30:57.551 { 00:30:57.551 "trtype": "TCP", 00:30:57.551 "adrfam": "IPv4", 00:30:57.551 "traddr": "10.0.0.2", 00:30:57.551 "trsvcid": "4420" 00:30:57.551 } 00:30:57.551 ], 00:30:57.551 "allow_any_host": true, 00:30:57.551 "hosts": [], 00:30:57.551 "serial_number": "SPDK00000000000001", 00:30:57.551 "model_number": "SPDK bdev Controller", 00:30:57.551 "max_namespaces": 2, 00:30:57.551 "min_cntlid": 1, 00:30:57.551 "max_cntlid": 65519, 00:30:57.551 "namespaces": [ 00:30:57.551 { 00:30:57.551 "nsid": 1, 00:30:57.551 "bdev_name": "Malloc0", 00:30:57.551 "name": "Malloc0", 00:30:57.551 "nguid": "80A288E9EDA94F33BB405FCF90604E69", 00:30:57.551 "uuid": "80a288e9-eda9-4f33-bb40-5fcf90604e69" 00:30:57.551 } 00:30:57.551 ] 00:30:57.551 } 00:30:57.551 ] 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=697504 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:30:57.551 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:30:57.551 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.812 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:57.812 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:30:57.812 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:30:57.812 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:30:57.812 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:57.812 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 2 -lt 200 ']' 00:30:57.812 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=3 00:30:57.812 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:30:57.812 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:57.812 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 3 -lt 200 ']' 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=4 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.073 Malloc1 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.073 Asynchronous Event Request test 00:30:58.073 Attaching to 10.0.0.2 00:30:58.073 Attached to 10.0.0.2 00:30:58.073 Registering asynchronous event callbacks... 00:30:58.073 Starting namespace attribute notice tests for all controllers... 00:30:58.073 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:58.073 aer_cb - Changed Namespace 00:30:58.073 Cleaning up... 
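[Editorial note, not part of the captured log] The nvmf_aer trace above is driven by test/nvmf/host/aer.sh. Below is a condensed, hedged sketch of the sequence the trace shows — not the verbatim script. It assumes rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock, and reuses the NQN, serial, addresses, and tool arguments exactly as they appear in the trace; everything else (backgrounding, the touch-file wait loop) is paraphrased from the visible shell steps.

# Target-side setup, as traced above:
rpc_cmd nvmf_create_transport -t tcp -o -u 8192                 # TCP transport init
rpc_cmd bdev_malloc_create 64 512 --name Malloc0                # backing bdev for namespace 1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The in-tree aer tool connects over TCP, registers AER callbacks, then touches a file
# once it is ready; the script polls for that file before changing the namespace list.
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
aerpid=$!
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

# Adding a second namespace is what triggers the "Changed Namespace" async event
# ("aer_cb for log page 4") seen in the output above.
rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
rpc_cmd nvmf_get_subsystems                                      # dump below lists both namespaces
wait "$aerpid"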
00:30:58.073 [ 00:30:58.073 { 00:30:58.073 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:58.073 "subtype": "Discovery", 00:30:58.073 "listen_addresses": [], 00:30:58.073 "allow_any_host": true, 00:30:58.073 "hosts": [] 00:30:58.073 }, 00:30:58.073 { 00:30:58.073 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:58.073 "subtype": "NVMe", 00:30:58.073 "listen_addresses": [ 00:30:58.073 { 00:30:58.073 "trtype": "TCP", 00:30:58.073 "adrfam": "IPv4", 00:30:58.073 "traddr": "10.0.0.2", 00:30:58.073 "trsvcid": "4420" 00:30:58.073 } 00:30:58.073 ], 00:30:58.073 "allow_any_host": true, 00:30:58.073 "hosts": [], 00:30:58.073 "serial_number": "SPDK00000000000001", 00:30:58.073 "model_number": "SPDK bdev Controller", 00:30:58.073 "max_namespaces": 2, 00:30:58.073 "min_cntlid": 1, 00:30:58.073 "max_cntlid": 65519, 00:30:58.073 "namespaces": [ 00:30:58.073 { 00:30:58.073 "nsid": 1, 00:30:58.073 "bdev_name": "Malloc0", 00:30:58.073 "name": "Malloc0", 00:30:58.073 "nguid": "80A288E9EDA94F33BB405FCF90604E69", 00:30:58.073 "uuid": "80a288e9-eda9-4f33-bb40-5fcf90604e69" 00:30:58.073 }, 00:30:58.073 { 00:30:58.073 "nsid": 2, 00:30:58.073 "bdev_name": "Malloc1", 00:30:58.073 "name": "Malloc1", 00:30:58.073 "nguid": "EC967699BA074888BB5032F85EAD6D8B", 00:30:58.073 "uuid": "ec967699-ba07-4888-bb50-32f85ead6d8b" 00:30:58.073 } 00:30:58.073 ] 00:30:58.073 } 00:30:58.073 ] 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 697504 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:58.073 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:58.073 rmmod nvme_tcp 00:30:58.073 rmmod nvme_fabrics 00:30:58.073 rmmod nvme_keyring 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 697445 ']' 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 697445 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 697445 ']' 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 697445 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 697445 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 697445' 00:30:58.334 killing process with pid 697445 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 697445 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 697445 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:58.334 14:33:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.879 14:33:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:00.879 00:31:00.879 real 0m11.955s 00:31:00.879 user 0m8.572s 00:31:00.879 sys 0m6.384s 00:31:00.879 14:33:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:00.879 14:33:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:00.879 ************************************ 00:31:00.879 END TEST nvmf_aer 00:31:00.879 ************************************ 00:31:00.879 14:33:24 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:00.879 14:33:24 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:00.879 14:33:24 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:00.879 14:33:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:00.879 ************************************ 00:31:00.879 START TEST nvmf_async_init 00:31:00.879 ************************************ 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:00.879 * Looking for test storage... 
00:31:00.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f2a5c0c17010478b8cbfa6e2811273b4 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:00.879 14:33:24 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:00.879 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:00.880 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:00.880 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.880 14:33:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:00.880 14:33:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.880 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:00.880 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:00.880 14:33:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:31:00.880 14:33:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:09.100 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:09.100 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:09.100 Found net devices under 0000:31:00.0: cvl_0_0 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:09.100 Found net devices under 0000:31:00.1: cvl_0_1 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:09.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:09.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:31:09.100 00:31:09.100 --- 10.0.0.2 ping statistics --- 00:31:09.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.100 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:09.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:09.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:31:09.100 00:31:09.100 --- 10.0.0.1 ping statistics --- 00:31:09.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:09.100 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=702383 00:31:09.100 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 702383 00:31:09.101 14:33:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:09.101 14:33:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@830 -- # '[' -z 702383 ']' 00:31:09.101 14:33:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.101 14:33:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:09.101 14:33:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:09.101 14:33:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:09.101 14:33:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:09.101 [2024-06-07 14:33:32.515052] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:31:09.101 [2024-06-07 14:33:32.515114] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.101 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.101 [2024-06-07 14:33:32.595547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.101 [2024-06-07 14:33:32.634530] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:09.101 [2024-06-07 14:33:32.634582] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:09.101 [2024-06-07 14:33:32.634590] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.101 [2024-06-07 14:33:32.634597] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.101 [2024-06-07 14:33:32.634603] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:09.101 [2024-06-07 14:33:32.634636] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.670 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:09.670 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:31:09.670 14:33:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:09.670 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:09.670 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:09.930 14:33:33 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:09.930 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:09.931 [2024-06-07 14:33:33.346000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:09.931 null0 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f2a5c0c17010478b8cbfa6e2811273b4 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:09.931 [2024-06-07 14:33:33.402290] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:09.931 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.192 nvme0n1 00:31:10.192 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.192 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:10.192 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.192 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.192 [ 00:31:10.192 { 00:31:10.192 "name": "nvme0n1", 00:31:10.192 "aliases": [ 00:31:10.192 "f2a5c0c1-7010-478b-8cbf-a6e2811273b4" 00:31:10.192 ], 00:31:10.192 "product_name": "NVMe disk", 00:31:10.192 "block_size": 512, 00:31:10.192 "num_blocks": 2097152, 00:31:10.192 "uuid": "f2a5c0c1-7010-478b-8cbf-a6e2811273b4", 00:31:10.192 "assigned_rate_limits": { 00:31:10.192 "rw_ios_per_sec": 0, 00:31:10.192 "rw_mbytes_per_sec": 0, 00:31:10.192 "r_mbytes_per_sec": 0, 00:31:10.192 "w_mbytes_per_sec": 0 00:31:10.192 }, 00:31:10.192 "claimed": false, 00:31:10.192 "zoned": false, 00:31:10.192 "supported_io_types": { 00:31:10.192 "read": true, 00:31:10.192 "write": true, 00:31:10.192 "unmap": false, 00:31:10.192 "write_zeroes": true, 00:31:10.192 "flush": true, 00:31:10.192 "reset": true, 00:31:10.192 "compare": true, 00:31:10.192 "compare_and_write": true, 00:31:10.192 "abort": true, 00:31:10.192 "nvme_admin": true, 00:31:10.192 "nvme_io": true 00:31:10.192 }, 00:31:10.192 "memory_domains": [ 00:31:10.192 { 00:31:10.192 "dma_device_id": "system", 00:31:10.192 "dma_device_type": 1 00:31:10.192 } 00:31:10.192 ], 00:31:10.192 "driver_specific": { 00:31:10.192 "nvme": [ 00:31:10.192 { 00:31:10.192 "trid": { 00:31:10.192 "trtype": "TCP", 00:31:10.192 "adrfam": "IPv4", 00:31:10.192 "traddr": "10.0.0.2", 00:31:10.193 "trsvcid": "4420", 00:31:10.193 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:10.193 }, 00:31:10.193 "ctrlr_data": { 00:31:10.193 "cntlid": 1, 00:31:10.193 "vendor_id": "0x8086", 00:31:10.193 "model_number": "SPDK bdev Controller", 00:31:10.193 "serial_number": "00000000000000000000", 00:31:10.193 "firmware_revision": 
"24.09", 00:31:10.193 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.193 "oacs": { 00:31:10.193 "security": 0, 00:31:10.193 "format": 0, 00:31:10.193 "firmware": 0, 00:31:10.193 "ns_manage": 0 00:31:10.193 }, 00:31:10.193 "multi_ctrlr": true, 00:31:10.193 "ana_reporting": false 00:31:10.193 }, 00:31:10.193 "vs": { 00:31:10.193 "nvme_version": "1.3" 00:31:10.193 }, 00:31:10.193 "ns_data": { 00:31:10.193 "id": 1, 00:31:10.193 "can_share": true 00:31:10.193 } 00:31:10.193 } 00:31:10.193 ], 00:31:10.193 "mp_policy": "active_passive" 00:31:10.193 } 00:31:10.193 } 00:31:10.193 ] 00:31:10.193 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.193 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:10.193 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.193 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.193 [2024-06-07 14:33:33.666817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:10.193 [2024-06-07 14:33:33.666877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c31de0 (9): Bad file descriptor 00:31:10.193 [2024-06-07 14:33:33.811294] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:10.193 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.193 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:10.193 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.193 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.193 [ 00:31:10.193 { 00:31:10.193 "name": "nvme0n1", 00:31:10.193 "aliases": [ 00:31:10.193 "f2a5c0c1-7010-478b-8cbf-a6e2811273b4" 00:31:10.193 ], 00:31:10.193 "product_name": "NVMe disk", 00:31:10.193 "block_size": 512, 00:31:10.193 "num_blocks": 2097152, 00:31:10.193 "uuid": "f2a5c0c1-7010-478b-8cbf-a6e2811273b4", 00:31:10.193 "assigned_rate_limits": { 00:31:10.193 "rw_ios_per_sec": 0, 00:31:10.193 "rw_mbytes_per_sec": 0, 00:31:10.193 "r_mbytes_per_sec": 0, 00:31:10.193 "w_mbytes_per_sec": 0 00:31:10.193 }, 00:31:10.193 "claimed": false, 00:31:10.193 "zoned": false, 00:31:10.193 "supported_io_types": { 00:31:10.193 "read": true, 00:31:10.193 "write": true, 00:31:10.193 "unmap": false, 00:31:10.193 "write_zeroes": true, 00:31:10.193 "flush": true, 00:31:10.193 "reset": true, 00:31:10.193 "compare": true, 00:31:10.193 "compare_and_write": true, 00:31:10.193 "abort": true, 00:31:10.193 "nvme_admin": true, 00:31:10.193 "nvme_io": true 00:31:10.193 }, 00:31:10.193 "memory_domains": [ 00:31:10.193 { 00:31:10.193 "dma_device_id": "system", 00:31:10.193 "dma_device_type": 1 00:31:10.193 } 00:31:10.193 ], 00:31:10.193 "driver_specific": { 00:31:10.193 "nvme": [ 00:31:10.193 { 00:31:10.193 "trid": { 00:31:10.193 "trtype": "TCP", 00:31:10.193 "adrfam": "IPv4", 00:31:10.193 "traddr": "10.0.0.2", 00:31:10.193 "trsvcid": "4420", 00:31:10.193 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:10.193 }, 00:31:10.193 "ctrlr_data": { 00:31:10.193 "cntlid": 2, 00:31:10.193 "vendor_id": "0x8086", 00:31:10.193 "model_number": "SPDK bdev Controller", 00:31:10.193 "serial_number": "00000000000000000000", 00:31:10.193 "firmware_revision": "24.09", 00:31:10.193 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.193 
"oacs": { 00:31:10.193 "security": 0, 00:31:10.193 "format": 0, 00:31:10.193 "firmware": 0, 00:31:10.193 "ns_manage": 0 00:31:10.193 }, 00:31:10.193 "multi_ctrlr": true, 00:31:10.193 "ana_reporting": false 00:31:10.193 }, 00:31:10.193 "vs": { 00:31:10.193 "nvme_version": "1.3" 00:31:10.193 }, 00:31:10.193 "ns_data": { 00:31:10.193 "id": 1, 00:31:10.193 "can_share": true 00:31:10.193 } 00:31:10.193 } 00:31:10.193 ], 00:31:10.193 "mp_policy": "active_passive" 00:31:10.193 } 00:31:10.193 } 00:31:10.193 ] 00:31:10.193 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.NpfLnNRClG 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.NpfLnNRClG 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.455 [2024-06-07 14:33:33.883531] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:10.455 [2024-06-07 14:33:33.883650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NpfLnNRClG 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.455 [2024-06-07 14:33:33.895555] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NpfLnNRClG 00:31:10.455 14:33:33 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.455 [2024-06-07 14:33:33.907588] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:10.455 [2024-06-07 14:33:33.907625] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:31:10.455 nvme0n1 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.455 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:10.455 [ 00:31:10.455 { 00:31:10.455 "name": "nvme0n1", 00:31:10.455 "aliases": [ 00:31:10.455 "f2a5c0c1-7010-478b-8cbf-a6e2811273b4" 00:31:10.455 ], 00:31:10.455 "product_name": "NVMe disk", 00:31:10.455 "block_size": 512, 00:31:10.455 "num_blocks": 2097152, 00:31:10.455 "uuid": "f2a5c0c1-7010-478b-8cbf-a6e2811273b4", 00:31:10.455 "assigned_rate_limits": { 00:31:10.455 "rw_ios_per_sec": 0, 00:31:10.455 "rw_mbytes_per_sec": 0, 00:31:10.455 "r_mbytes_per_sec": 0, 00:31:10.455 "w_mbytes_per_sec": 0 00:31:10.455 }, 00:31:10.455 "claimed": false, 00:31:10.455 "zoned": false, 00:31:10.455 "supported_io_types": { 00:31:10.455 "read": true, 00:31:10.455 "write": true, 00:31:10.455 "unmap": false, 00:31:10.455 "write_zeroes": true, 00:31:10.455 "flush": true, 00:31:10.455 "reset": true, 00:31:10.455 "compare": true, 00:31:10.455 "compare_and_write": true, 00:31:10.455 "abort": true, 00:31:10.455 "nvme_admin": true, 00:31:10.455 "nvme_io": true 00:31:10.455 }, 00:31:10.455 "memory_domains": [ 00:31:10.455 { 00:31:10.455 "dma_device_id": "system", 00:31:10.455 "dma_device_type": 1 00:31:10.455 } 00:31:10.455 ], 00:31:10.455 "driver_specific": { 00:31:10.455 "nvme": [ 00:31:10.455 { 00:31:10.455 "trid": { 00:31:10.455 "trtype": "TCP", 00:31:10.455 "adrfam": "IPv4", 00:31:10.455 "traddr": "10.0.0.2", 00:31:10.455 "trsvcid": "4421", 00:31:10.455 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:10.455 }, 00:31:10.455 "ctrlr_data": { 00:31:10.455 "cntlid": 3, 00:31:10.455 "vendor_id": "0x8086", 00:31:10.455 "model_number": "SPDK bdev Controller", 00:31:10.455 "serial_number": "00000000000000000000", 00:31:10.455 "firmware_revision": "24.09", 00:31:10.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.455 "oacs": { 00:31:10.455 "security": 0, 00:31:10.455 "format": 0, 00:31:10.455 "firmware": 0, 00:31:10.455 "ns_manage": 0 00:31:10.455 }, 00:31:10.455 "multi_ctrlr": true, 00:31:10.455 "ana_reporting": false 00:31:10.455 }, 00:31:10.455 "vs": { 00:31:10.455 "nvme_version": "1.3" 00:31:10.455 }, 00:31:10.455 "ns_data": { 00:31:10.455 "id": 1, 00:31:10.456 "can_share": true 00:31:10.456 } 00:31:10.456 } 00:31:10.456 ], 00:31:10.456 "mp_policy": "active_passive" 00:31:10.456 } 00:31:10.456 } 00:31:10.456 ] 00:31:10.456 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.456 14:33:33 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:10.456 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.456 14:33:33 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # 
set +x 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.NpfLnNRClG 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:10.456 rmmod nvme_tcp 00:31:10.456 rmmod nvme_fabrics 00:31:10.456 rmmod nvme_keyring 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 702383 ']' 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 702383 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 702383 ']' 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 702383 00:31:10.456 14:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:31:10.717 14:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:10.717 14:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 702383 00:31:10.717 14:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:10.717 14:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:10.717 14:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 702383' 00:31:10.717 killing process with pid 702383 00:31:10.717 14:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 702383 00:31:10.717 [2024-06-07 14:33:34.154303] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:31:10.717 [2024-06-07 14:33:34.154332] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:10.717 14:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 702383 00:31:10.717 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:10.717 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:10.717 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:10.717 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:10.717 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:10.717 14:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.717 14:33:34 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:10.717 14:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.259 14:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:13.259 00:31:13.259 real 0m12.269s 00:31:13.259 user 0m4.315s 00:31:13.259 sys 0m6.422s 00:31:13.259 14:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:13.259 14:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:13.259 ************************************ 00:31:13.259 END TEST nvmf_async_init 00:31:13.259 ************************************ 00:31:13.259 14:33:36 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:13.259 14:33:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:13.259 14:33:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:13.259 14:33:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:13.259 ************************************ 00:31:13.259 START TEST dma 00:31:13.259 ************************************ 00:31:13.259 14:33:36 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:13.259 * Looking for test storage... 00:31:13.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:13.259 14:33:36 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.259 14:33:36 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.259 14:33:36 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.259 14:33:36 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.259 14:33:36 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.259 14:33:36 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.259 14:33:36 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.259 14:33:36 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:31:13.259 14:33:36 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:13.259 14:33:36 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:13.259 14:33:36 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:13.259 14:33:36 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:31:13.259 00:31:13.259 real 0m0.131s 00:31:13.259 user 0m0.071s 00:31:13.259 sys 0m0.069s 00:31:13.259 
14:33:36 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:13.259 14:33:36 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:31:13.259 ************************************ 00:31:13.259 END TEST dma 00:31:13.259 ************************************ 00:31:13.259 14:33:36 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:13.259 14:33:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:13.259 14:33:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:13.259 14:33:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:13.259 ************************************ 00:31:13.259 START TEST nvmf_identify 00:31:13.259 ************************************ 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:13.259 * Looking for test storage... 00:31:13.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:13.259 14:33:36 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:13.260 14:33:36 
nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 
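Recap of the nvmf_async_init teardown logged just above (before the dma stub): with the xtrace prefixes stripped it reduces to roughly the commands below. The bdev/controller names, the PSK temp file and the target PID are the values from this run, and rpc_cmd is the suite's wrapper around scripts/rpc.py.

    rpc_cmd bdev_get_bdevs -b nvme0n1          # dump the bdev backed by the attached controller
    rpc_cmd bdev_nvme_detach_controller nvme0  # drop the NVMe/TCP (TLS) connection
    rm -f /tmp/tmp.NpfLnNRClG                  # remove the temporary PSK file
    modprobe -v -r nvme-tcp                    # unload the kernel initiator modules
    modprobe -v -r nvme-fabrics
    kill 702383                                # killprocess: stop the nvmf_tgt used by that test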
00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:31:13.260 14:33:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.399 14:33:44 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:21.399 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:21.399 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:21.400 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:21.400 Found net devices under 0000:31:00.0: cvl_0_0 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:21.400 Found net devices under 0000:31:00.1: cvl_0_1 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:21.400 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.734 ms 00:31:21.400 00:31:21.400 --- 10.0.0.2 ping statistics --- 00:31:21.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.400 rtt min/avg/max/mdev = 0.734/0.734/0.734/0.000 ms 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:21.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.391 ms 00:31:21.400 00:31:21.400 --- 10.0.0.1 ping statistics --- 00:31:21.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.400 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=707296 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 707296 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 707296 ']' 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:21.400 14:33:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:21.400 [2024-06-07 14:33:44.867573] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
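Up to this point nvmftestinit has wired the two ice/E810 ports into a loopback topology: cvl_0_0 is moved into a private network namespace for the target, cvl_0_1 stays in the root namespace for the initiator, and the target is then started inside the namespace. A minimal sketch of that wiring, using the same commands and addresses echoed above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open TCP/4420 on the initiator-side interface
    ping -c 1 10.0.0.2                                                   # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator sanity check
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF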
00:31:21.400 [2024-06-07 14:33:44.867640] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.400 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.400 [2024-06-07 14:33:44.946583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:21.400 [2024-06-07 14:33:44.987926] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.400 [2024-06-07 14:33:44.987966] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.400 [2024-06-07 14:33:44.987974] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.400 [2024-06-07 14:33:44.987983] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.400 [2024-06-07 14:33:44.987989] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.400 [2024-06-07 14:33:44.988137] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.400 [2024-06-07 14:33:44.988335] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.400 [2024-06-07 14:33:44.988336] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:21.400 [2024-06-07 14:33:44.988237] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.344 [2024-06-07 14:33:45.657719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.344 Malloc0 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.344 [2024-06-07 14:33:45.757077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.344 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.344 [ 00:31:22.344 { 00:31:22.345 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:22.345 "subtype": "Discovery", 00:31:22.345 "listen_addresses": [ 00:31:22.345 { 00:31:22.345 "trtype": "TCP", 00:31:22.345 "adrfam": "IPv4", 00:31:22.345 "traddr": "10.0.0.2", 00:31:22.345 "trsvcid": "4420" 00:31:22.345 } 00:31:22.345 ], 00:31:22.345 "allow_any_host": true, 00:31:22.345 "hosts": [] 00:31:22.345 }, 00:31:22.345 { 00:31:22.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:22.345 "subtype": "NVMe", 00:31:22.345 "listen_addresses": [ 00:31:22.345 { 00:31:22.345 "trtype": "TCP", 00:31:22.345 "adrfam": "IPv4", 00:31:22.345 "traddr": "10.0.0.2", 00:31:22.345 "trsvcid": "4420" 00:31:22.345 } 00:31:22.345 ], 00:31:22.345 "allow_any_host": true, 00:31:22.345 "hosts": [], 00:31:22.345 "serial_number": "SPDK00000000000001", 00:31:22.345 "model_number": "SPDK bdev Controller", 00:31:22.345 "max_namespaces": 32, 00:31:22.345 "min_cntlid": 1, 00:31:22.345 "max_cntlid": 65519, 00:31:22.345 "namespaces": [ 00:31:22.345 { 00:31:22.345 "nsid": 1, 00:31:22.345 "bdev_name": "Malloc0", 00:31:22.345 "name": "Malloc0", 00:31:22.345 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:22.345 "eui64": "ABCDEF0123456789", 00:31:22.345 "uuid": "c6fda70b-0be2-4e02-816e-8858a43ea973" 00:31:22.345 } 00:31:22.345 ] 00:31:22.345 } 00:31:22.345 ] 00:31:22.345 14:33:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.345 14:33:45 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:22.345 [2024-06-07 14:33:45.816467] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
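Collapsed from the xtrace above, the identify test provisions the freshly started target over RPC and then points the userspace identify utility at its discovery service. Every value below (NQNs, Malloc0 sizing, addresses, ports) is exactly as used in this run:

    # target-side provisioning, issued through rpc_cmd against the nvmf_tgt in the namespace
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # host-side: walk the discovery subsystem over TCP with the SPDK identify tool
    spdk_nvme_identify -L all \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'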
00:31:22.345 [2024-06-07 14:33:45.816508] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid707596 ] 00:31:22.345 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.345 [2024-06-07 14:33:45.848848] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:31:22.345 [2024-06-07 14:33:45.848889] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:22.345 [2024-06-07 14:33:45.848894] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:22.345 [2024-06-07 14:33:45.848905] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:22.345 [2024-06-07 14:33:45.848914] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:22.345 [2024-06-07 14:33:45.852225] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:31:22.345 [2024-06-07 14:33:45.852253] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x23a7990 0 00:31:22.345 [2024-06-07 14:33:45.860202] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:22.345 [2024-06-07 14:33:45.860220] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:22.345 [2024-06-07 14:33:45.860224] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:22.345 [2024-06-07 14:33:45.860228] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:22.345 [2024-06-07 14:33:45.860261] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.860267] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.860271] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a7990) 00:31:22.345 [2024-06-07 14:33:45.860284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:22.345 [2024-06-07 14:33:45.860300] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24011f0, cid 0, qid 0 00:31:22.345 [2024-06-07 14:33:45.866202] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.345 [2024-06-07 14:33:45.866212] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.345 [2024-06-07 14:33:45.866215] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.866220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24011f0) on tqpair=0x23a7990 00:31:22.345 [2024-06-07 14:33:45.866230] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:22.345 [2024-06-07 14:33:45.866236] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:31:22.345 [2024-06-07 14:33:45.866241] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:31:22.345 [2024-06-07 14:33:45.866255] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.866262] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.866266] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a7990) 00:31:22.345 [2024-06-07 14:33:45.866274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.345 [2024-06-07 14:33:45.866286] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24011f0, cid 0, qid 0 00:31:22.345 [2024-06-07 14:33:45.866481] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.345 [2024-06-07 14:33:45.866488] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.345 [2024-06-07 14:33:45.866491] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.866495] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24011f0) on tqpair=0x23a7990 00:31:22.345 [2024-06-07 14:33:45.866501] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:31:22.345 [2024-06-07 14:33:45.866509] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:31:22.345 [2024-06-07 14:33:45.866515] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.866519] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.866522] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a7990) 00:31:22.345 [2024-06-07 14:33:45.866529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.345 [2024-06-07 14:33:45.866539] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24011f0, cid 0, qid 0 00:31:22.345 [2024-06-07 14:33:45.866755] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.345 [2024-06-07 14:33:45.866761] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.345 [2024-06-07 14:33:45.866765] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.866768] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24011f0) on tqpair=0x23a7990 00:31:22.345 [2024-06-07 14:33:45.866775] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:31:22.345 [2024-06-07 14:33:45.866783] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:31:22.345 [2024-06-07 14:33:45.866789] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.866793] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.866796] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a7990) 00:31:22.345 [2024-06-07 14:33:45.866803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.345 [2024-06-07 14:33:45.866812] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24011f0, cid 0, qid 0 00:31:22.345 [2024-06-07 14:33:45.867057] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.345 [2024-06-07 
14:33:45.867064] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.345 [2024-06-07 14:33:45.867067] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.867071] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24011f0) on tqpair=0x23a7990 00:31:22.345 [2024-06-07 14:33:45.867077] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:22.345 [2024-06-07 14:33:45.867086] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.867089] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.867093] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a7990) 00:31:22.345 [2024-06-07 14:33:45.867101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.345 [2024-06-07 14:33:45.867111] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24011f0, cid 0, qid 0 00:31:22.345 [2024-06-07 14:33:45.867310] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.345 [2024-06-07 14:33:45.867317] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.345 [2024-06-07 14:33:45.867320] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.867324] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24011f0) on tqpair=0x23a7990 00:31:22.345 [2024-06-07 14:33:45.867329] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:31:22.345 [2024-06-07 14:33:45.867334] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:31:22.345 [2024-06-07 14:33:45.867341] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:22.345 [2024-06-07 14:33:45.867446] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:31:22.345 [2024-06-07 14:33:45.867451] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:22.345 [2024-06-07 14:33:45.867459] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.867463] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.345 [2024-06-07 14:33:45.867466] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a7990) 00:31:22.345 [2024-06-07 14:33:45.867473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.345 [2024-06-07 14:33:45.867483] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24011f0, cid 0, qid 0 00:31:22.345 [2024-06-07 14:33:45.867670] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.345 [2024-06-07 14:33:45.867677] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.346 [2024-06-07 14:33:45.867680] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.867684] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24011f0) on tqpair=0x23a7990 00:31:22.346 [2024-06-07 14:33:45.867689] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:22.346 [2024-06-07 14:33:45.867698] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.867702] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.867705] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a7990) 00:31:22.346 [2024-06-07 14:33:45.867712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.346 [2024-06-07 14:33:45.867721] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24011f0, cid 0, qid 0 00:31:22.346 [2024-06-07 14:33:45.867952] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.346 [2024-06-07 14:33:45.867958] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.346 [2024-06-07 14:33:45.867961] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.867965] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24011f0) on tqpair=0x23a7990 00:31:22.346 [2024-06-07 14:33:45.867970] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:22.346 [2024-06-07 14:33:45.867975] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:31:22.346 [2024-06-07 14:33:45.867984] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:31:22.346 [2024-06-07 14:33:45.867992] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:31:22.346 [2024-06-07 14:33:45.868000] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868003] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a7990) 00:31:22.346 [2024-06-07 14:33:45.868010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.346 [2024-06-07 14:33:45.868020] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24011f0, cid 0, qid 0 00:31:22.346 [2024-06-07 14:33:45.868261] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.346 [2024-06-07 14:33:45.868268] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.346 [2024-06-07 14:33:45.868272] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868276] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a7990): datao=0, datal=4096, cccid=0 00:31:22.346 [2024-06-07 14:33:45.868281] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24011f0) on tqpair(0x23a7990): expected_datao=0, payload_size=4096 00:31:22.346 [2024-06-07 14:33:45.868285] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868293] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868297] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868457] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.346 [2024-06-07 14:33:45.868463] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.346 [2024-06-07 14:33:45.868466] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868470] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24011f0) on tqpair=0x23a7990 00:31:22.346 [2024-06-07 14:33:45.868478] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:31:22.346 [2024-06-07 14:33:45.868483] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:31:22.346 [2024-06-07 14:33:45.868487] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:31:22.346 [2024-06-07 14:33:45.868494] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:31:22.346 [2024-06-07 14:33:45.868499] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:31:22.346 [2024-06-07 14:33:45.868503] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:31:22.346 [2024-06-07 14:33:45.868511] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:31:22.346 [2024-06-07 14:33:45.868518] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868522] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868525] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a7990) 00:31:22.346 [2024-06-07 14:33:45.868532] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:22.346 [2024-06-07 14:33:45.868543] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24011f0, cid 0, qid 0 00:31:22.346 [2024-06-07 14:33:45.868760] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.346 [2024-06-07 14:33:45.868766] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.346 [2024-06-07 14:33:45.868772] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868776] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24011f0) on tqpair=0x23a7990 00:31:22.346 [2024-06-07 14:33:45.868784] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868787] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868791] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a7990) 00:31:22.346 [2024-06-07 14:33:45.868797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:31:22.346 [2024-06-07 14:33:45.868803] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868807] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868810] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x23a7990) 00:31:22.346 [2024-06-07 14:33:45.868816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.346 [2024-06-07 14:33:45.868821] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868825] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868828] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x23a7990) 00:31:22.346 [2024-06-07 14:33:45.868834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.346 [2024-06-07 14:33:45.868840] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868843] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868847] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a7990) 00:31:22.346 [2024-06-07 14:33:45.868852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.346 [2024-06-07 14:33:45.868857] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:31:22.346 [2024-06-07 14:33:45.868867] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:22.346 [2024-06-07 14:33:45.868873] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.868877] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a7990) 00:31:22.346 [2024-06-07 14:33:45.868883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.346 [2024-06-07 14:33:45.868895] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24011f0, cid 0, qid 0 00:31:22.346 [2024-06-07 14:33:45.868900] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2401350, cid 1, qid 0 00:31:22.346 [2024-06-07 14:33:45.868905] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24014b0, cid 2, qid 0 00:31:22.346 [2024-06-07 14:33:45.868909] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2401610, cid 3, qid 0 00:31:22.346 [2024-06-07 14:33:45.868914] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2401770, cid 4, qid 0 00:31:22.346 [2024-06-07 14:33:45.869183] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.346 [2024-06-07 14:33:45.869189] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.346 [2024-06-07 14:33:45.869193] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.869200] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2401770) on tqpair=0x23a7990 
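The debug trace above is the identify tool's initiator walking the standard controller-initialization sequence against the discovery controller: FABRIC CONNECT on the admin queue, reads of the VS and CAP properties, CC.EN set to 1, a wait for CSTS.RDY = 1, IDENTIFY controller, AER configuration and keep-alive setup. For comparison only (not exercised in this run), the same discovery listener can typically also be queried with the kernel initiator via nvme-cli from the root namespace:

    # assumption: nvme-cli installed and nvme-tcp/nvme-fabrics modules loaded
    nvme discover -t tcp -a 10.0.0.2 -s 4420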
00:31:22.346 [2024-06-07 14:33:45.869206] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:31:22.346 [2024-06-07 14:33:45.869211] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:31:22.346 [2024-06-07 14:33:45.869223] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.869227] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a7990) 00:31:22.346 [2024-06-07 14:33:45.869233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.346 [2024-06-07 14:33:45.869243] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2401770, cid 4, qid 0 00:31:22.346 [2024-06-07 14:33:45.869455] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.346 [2024-06-07 14:33:45.869461] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.346 [2024-06-07 14:33:45.869465] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.869468] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a7990): datao=0, datal=4096, cccid=4 00:31:22.346 [2024-06-07 14:33:45.869473] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2401770) on tqpair(0x23a7990): expected_datao=0, payload_size=4096 00:31:22.346 [2024-06-07 14:33:45.869477] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.869522] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.869526] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.869684] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.346 [2024-06-07 14:33:45.869690] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.346 [2024-06-07 14:33:45.869694] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.346 [2024-06-07 14:33:45.869698] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2401770) on tqpair=0x23a7990 00:31:22.346 [2024-06-07 14:33:45.869709] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:31:22.347 [2024-06-07 14:33:45.869728] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.869732] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a7990) 00:31:22.347 [2024-06-07 14:33:45.869739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-06-07 14:33:45.869745] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.869749] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.869753] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23a7990) 00:31:22.347 [2024-06-07 14:33:45.869758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.347 [2024-06-07 14:33:45.869773] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2401770, cid 4, qid 0 00:31:22.347 [2024-06-07 14:33:45.869779] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24018d0, cid 5, qid 0 00:31:22.347 [2024-06-07 14:33:45.869994] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.347 [2024-06-07 14:33:45.870001] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.347 [2024-06-07 14:33:45.870004] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.870008] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a7990): datao=0, datal=1024, cccid=4 00:31:22.347 [2024-06-07 14:33:45.870012] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2401770) on tqpair(0x23a7990): expected_datao=0, payload_size=1024 00:31:22.347 [2024-06-07 14:33:45.870016] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.870023] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.870026] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.870032] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.347 [2024-06-07 14:33:45.870039] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.347 [2024-06-07 14:33:45.870043] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.870046] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24018d0) on tqpair=0x23a7990 00:31:22.347 [2024-06-07 14:33:45.911419] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.347 [2024-06-07 14:33:45.911428] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.347 [2024-06-07 14:33:45.911432] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.911435] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2401770) on tqpair=0x23a7990 00:31:22.347 [2024-06-07 14:33:45.911446] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.911450] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a7990) 00:31:22.347 [2024-06-07 14:33:45.911456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-06-07 14:33:45.911470] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2401770, cid 4, qid 0 00:31:22.347 [2024-06-07 14:33:45.911668] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.347 [2024-06-07 14:33:45.911675] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.347 [2024-06-07 14:33:45.911678] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.911682] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a7990): datao=0, datal=3072, cccid=4 00:31:22.347 [2024-06-07 14:33:45.911686] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2401770) on tqpair(0x23a7990): expected_datao=0, payload_size=3072 00:31:22.347 [2024-06-07 14:33:45.911690] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.911697] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
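(Note: the report printed below is the decoded identify-controller data and discovery log page of the discovery subsystem at 10.0.0.2:4420. The GET LOG PAGE commands above fetch log page 0x70 header-first and then the records, which is why cdw10 walks through 00ff0070, 02ff0070 and 00010070 with shrinking payload sizes. A hedged sketch of issuing that read with the public API follows; struct and constant names are taken from spdk/nvme.h and spdk/nvmf_spec.h and should be verified against this tree.)

    /* Sketch: read the discovery log page header whose decoded fields
     * ("Generation Counter", "Number of Records", ...) appear below.
     * Assumes an already-connected discovery controller. */
    #include <stdbool.h>
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static volatile bool g_log_done;

    static void get_log_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            g_log_done = true;
    }

    static int read_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
                                     struct spdk_nvmf_discovery_log_page *hdr)
    {
            int rc;

            g_log_done = false;
            rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                                  0 /* nsid, as in the trace */,
                                                  hdr, sizeof(*hdr), 0 /* offset */,
                                                  get_log_cb, NULL);
            if (rc != 0) {
                    return rc;
            }
            while (!g_log_done) {
                    spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
            /* hdr->genctr and hdr->numrec back the "Generation Counter: 2" and
             * "Number of Records: 2" lines in the dump below. */
            return 0;
    }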
00:31:22.347 [2024-06-07 14:33:45.911700] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.911870] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.347 [2024-06-07 14:33:45.911876] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.347 [2024-06-07 14:33:45.911879] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.911883] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2401770) on tqpair=0x23a7990 00:31:22.347 [2024-06-07 14:33:45.911892] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.911895] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a7990) 00:31:22.347 [2024-06-07 14:33:45.911901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.347 [2024-06-07 14:33:45.911914] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2401770, cid 4, qid 0 00:31:22.347 [2024-06-07 14:33:45.912172] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.347 [2024-06-07 14:33:45.912178] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.347 [2024-06-07 14:33:45.912181] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.912185] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a7990): datao=0, datal=8, cccid=4 00:31:22.347 [2024-06-07 14:33:45.912189] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2401770) on tqpair(0x23a7990): expected_datao=0, payload_size=8 00:31:22.347 [2024-06-07 14:33:45.912193] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.912204] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.912207] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.953380] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.347 [2024-06-07 14:33:45.953389] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.347 [2024-06-07 14:33:45.953396] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.347 [2024-06-07 14:33:45.953400] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2401770) on tqpair=0x23a7990 00:31:22.347 ===================================================== 00:31:22.347 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:22.347 ===================================================== 00:31:22.347 Controller Capabilities/Features 00:31:22.347 ================================ 00:31:22.347 Vendor ID: 0000 00:31:22.347 Subsystem Vendor ID: 0000 00:31:22.347 Serial Number: .................... 00:31:22.347 Model Number: ........................................ 
00:31:22.347 Firmware Version: 24.09 00:31:22.347 Recommended Arb Burst: 0 00:31:22.347 IEEE OUI Identifier: 00 00 00 00:31:22.347 Multi-path I/O 00:31:22.347 May have multiple subsystem ports: No 00:31:22.347 May have multiple controllers: No 00:31:22.347 Associated with SR-IOV VF: No 00:31:22.347 Max Data Transfer Size: 131072 00:31:22.347 Max Number of Namespaces: 0 00:31:22.347 Max Number of I/O Queues: 1024 00:31:22.347 NVMe Specification Version (VS): 1.3 00:31:22.347 NVMe Specification Version (Identify): 1.3 00:31:22.347 Maximum Queue Entries: 128 00:31:22.347 Contiguous Queues Required: Yes 00:31:22.347 Arbitration Mechanisms Supported 00:31:22.347 Weighted Round Robin: Not Supported 00:31:22.347 Vendor Specific: Not Supported 00:31:22.347 Reset Timeout: 15000 ms 00:31:22.347 Doorbell Stride: 4 bytes 00:31:22.347 NVM Subsystem Reset: Not Supported 00:31:22.347 Command Sets Supported 00:31:22.347 NVM Command Set: Supported 00:31:22.347 Boot Partition: Not Supported 00:31:22.347 Memory Page Size Minimum: 4096 bytes 00:31:22.347 Memory Page Size Maximum: 4096 bytes 00:31:22.347 Persistent Memory Region: Not Supported 00:31:22.347 Optional Asynchronous Events Supported 00:31:22.347 Namespace Attribute Notices: Not Supported 00:31:22.347 Firmware Activation Notices: Not Supported 00:31:22.347 ANA Change Notices: Not Supported 00:31:22.347 PLE Aggregate Log Change Notices: Not Supported 00:31:22.347 LBA Status Info Alert Notices: Not Supported 00:31:22.347 EGE Aggregate Log Change Notices: Not Supported 00:31:22.347 Normal NVM Subsystem Shutdown event: Not Supported 00:31:22.347 Zone Descriptor Change Notices: Not Supported 00:31:22.347 Discovery Log Change Notices: Supported 00:31:22.347 Controller Attributes 00:31:22.347 128-bit Host Identifier: Not Supported 00:31:22.347 Non-Operational Permissive Mode: Not Supported 00:31:22.347 NVM Sets: Not Supported 00:31:22.347 Read Recovery Levels: Not Supported 00:31:22.347 Endurance Groups: Not Supported 00:31:22.347 Predictable Latency Mode: Not Supported 00:31:22.347 Traffic Based Keep ALive: Not Supported 00:31:22.347 Namespace Granularity: Not Supported 00:31:22.347 SQ Associations: Not Supported 00:31:22.347 UUID List: Not Supported 00:31:22.347 Multi-Domain Subsystem: Not Supported 00:31:22.347 Fixed Capacity Management: Not Supported 00:31:22.347 Variable Capacity Management: Not Supported 00:31:22.347 Delete Endurance Group: Not Supported 00:31:22.347 Delete NVM Set: Not Supported 00:31:22.347 Extended LBA Formats Supported: Not Supported 00:31:22.347 Flexible Data Placement Supported: Not Supported 00:31:22.347 00:31:22.347 Controller Memory Buffer Support 00:31:22.347 ================================ 00:31:22.347 Supported: No 00:31:22.347 00:31:22.347 Persistent Memory Region Support 00:31:22.347 ================================ 00:31:22.347 Supported: No 00:31:22.347 00:31:22.347 Admin Command Set Attributes 00:31:22.347 ============================ 00:31:22.347 Security Send/Receive: Not Supported 00:31:22.347 Format NVM: Not Supported 00:31:22.347 Firmware Activate/Download: Not Supported 00:31:22.347 Namespace Management: Not Supported 00:31:22.347 Device Self-Test: Not Supported 00:31:22.347 Directives: Not Supported 00:31:22.347 NVMe-MI: Not Supported 00:31:22.347 Virtualization Management: Not Supported 00:31:22.347 Doorbell Buffer Config: Not Supported 00:31:22.347 Get LBA Status Capability: Not Supported 00:31:22.347 Command & Feature Lockdown Capability: Not Supported 00:31:22.348 Abort Command Limit: 1 00:31:22.348 Async 
Event Request Limit: 4 00:31:22.348 Number of Firmware Slots: N/A 00:31:22.348 Firmware Slot 1 Read-Only: N/A 00:31:22.348 Firmware Activation Without Reset: N/A 00:31:22.348 Multiple Update Detection Support: N/A 00:31:22.348 Firmware Update Granularity: No Information Provided 00:31:22.348 Per-Namespace SMART Log: No 00:31:22.348 Asymmetric Namespace Access Log Page: Not Supported 00:31:22.348 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:22.348 Command Effects Log Page: Not Supported 00:31:22.348 Get Log Page Extended Data: Supported 00:31:22.348 Telemetry Log Pages: Not Supported 00:31:22.348 Persistent Event Log Pages: Not Supported 00:31:22.348 Supported Log Pages Log Page: May Support 00:31:22.348 Commands Supported & Effects Log Page: Not Supported 00:31:22.348 Feature Identifiers & Effects Log Page:May Support 00:31:22.348 NVMe-MI Commands & Effects Log Page: May Support 00:31:22.348 Data Area 4 for Telemetry Log: Not Supported 00:31:22.348 Error Log Page Entries Supported: 128 00:31:22.348 Keep Alive: Not Supported 00:31:22.348 00:31:22.348 NVM Command Set Attributes 00:31:22.348 ========================== 00:31:22.348 Submission Queue Entry Size 00:31:22.348 Max: 1 00:31:22.348 Min: 1 00:31:22.348 Completion Queue Entry Size 00:31:22.348 Max: 1 00:31:22.348 Min: 1 00:31:22.348 Number of Namespaces: 0 00:31:22.348 Compare Command: Not Supported 00:31:22.348 Write Uncorrectable Command: Not Supported 00:31:22.348 Dataset Management Command: Not Supported 00:31:22.348 Write Zeroes Command: Not Supported 00:31:22.348 Set Features Save Field: Not Supported 00:31:22.348 Reservations: Not Supported 00:31:22.348 Timestamp: Not Supported 00:31:22.348 Copy: Not Supported 00:31:22.348 Volatile Write Cache: Not Present 00:31:22.348 Atomic Write Unit (Normal): 1 00:31:22.348 Atomic Write Unit (PFail): 1 00:31:22.348 Atomic Compare & Write Unit: 1 00:31:22.348 Fused Compare & Write: Supported 00:31:22.348 Scatter-Gather List 00:31:22.348 SGL Command Set: Supported 00:31:22.348 SGL Keyed: Supported 00:31:22.348 SGL Bit Bucket Descriptor: Not Supported 00:31:22.348 SGL Metadata Pointer: Not Supported 00:31:22.348 Oversized SGL: Not Supported 00:31:22.348 SGL Metadata Address: Not Supported 00:31:22.348 SGL Offset: Supported 00:31:22.348 Transport SGL Data Block: Not Supported 00:31:22.348 Replay Protected Memory Block: Not Supported 00:31:22.348 00:31:22.348 Firmware Slot Information 00:31:22.348 ========================= 00:31:22.348 Active slot: 0 00:31:22.348 00:31:22.348 00:31:22.348 Error Log 00:31:22.348 ========= 00:31:22.348 00:31:22.348 Active Namespaces 00:31:22.348 ================= 00:31:22.348 Discovery Log Page 00:31:22.348 ================== 00:31:22.348 Generation Counter: 2 00:31:22.348 Number of Records: 2 00:31:22.348 Record Format: 0 00:31:22.348 00:31:22.348 Discovery Log Entry 0 00:31:22.348 ---------------------- 00:31:22.348 Transport Type: 3 (TCP) 00:31:22.348 Address Family: 1 (IPv4) 00:31:22.348 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:22.348 Entry Flags: 00:31:22.348 Duplicate Returned Information: 1 00:31:22.348 Explicit Persistent Connection Support for Discovery: 1 00:31:22.348 Transport Requirements: 00:31:22.348 Secure Channel: Not Required 00:31:22.348 Port ID: 0 (0x0000) 00:31:22.348 Controller ID: 65535 (0xffff) 00:31:22.348 Admin Max SQ Size: 128 00:31:22.348 Transport Service Identifier: 4420 00:31:22.348 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:22.348 Transport Address: 10.0.0.2 00:31:22.348 
Discovery Log Entry 1 00:31:22.348 ---------------------- 00:31:22.348 Transport Type: 3 (TCP) 00:31:22.348 Address Family: 1 (IPv4) 00:31:22.348 Subsystem Type: 2 (NVM Subsystem) 00:31:22.348 Entry Flags: 00:31:22.348 Duplicate Returned Information: 0 00:31:22.348 Explicit Persistent Connection Support for Discovery: 0 00:31:22.348 Transport Requirements: 00:31:22.348 Secure Channel: Not Required 00:31:22.348 Port ID: 0 (0x0000) 00:31:22.348 Controller ID: 65535 (0xffff) 00:31:22.348 Admin Max SQ Size: 128 00:31:22.348 Transport Service Identifier: 4420 00:31:22.348 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:22.348 Transport Address: 10.0.0.2 [2024-06-07 14:33:45.953482] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:31:22.348 [2024-06-07 14:33:45.953495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.348 [2024-06-07 14:33:45.953502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.348 [2024-06-07 14:33:45.953508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.348 [2024-06-07 14:33:45.953514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.348 [2024-06-07 14:33:45.953522] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.348 [2024-06-07 14:33:45.953525] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.348 [2024-06-07 14:33:45.953529] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a7990) 00:31:22.348 [2024-06-07 14:33:45.953536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.348 [2024-06-07 14:33:45.953548] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2401610, cid 3, qid 0 00:31:22.348 [2024-06-07 14:33:45.953636] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.348 [2024-06-07 14:33:45.953642] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.348 [2024-06-07 14:33:45.953645] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.348 [2024-06-07 14:33:45.953649] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2401610) on tqpair=0x23a7990 00:31:22.348 [2024-06-07 14:33:45.953659] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.348 [2024-06-07 14:33:45.953663] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.348 [2024-06-07 14:33:45.953666] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a7990) 00:31:22.348 [2024-06-07 14:33:45.953673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.348 [2024-06-07 14:33:45.953685] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2401610, cid 3, qid 0 00:31:22.348 [2024-06-07 14:33:45.953867] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.348 [2024-06-07 14:33:45.953873] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.348 [2024-06-07 14:33:45.953877] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.348 [2024-06-07 14:33:45.953880] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2401610) on tqpair=0x23a7990 00:31:22.348 [2024-06-07 14:33:45.953886] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:31:22.348 [2024-06-07 14:33:45.953890] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:31:22.348 [2024-06-07 14:33:45.953899] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.348 [2024-06-07 14:33:45.953903] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.348 [2024-06-07 14:33:45.953906] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a7990) 00:31:22.348 [2024-06-07 14:33:45.953913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.348 [2024-06-07 14:33:45.953922] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2401610, cid 3, qid 0 00:31:22.348 [2024-06-07 14:33:45.954171] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.348 [2024-06-07 14:33:45.954177] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.348 [2024-06-07 14:33:45.954182] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.348 [2024-06-07 14:33:45.954186] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2401610) on tqpair=0x23a7990 00:31:22.348 [2024-06-07 14:33:45.954201] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.348 [2024-06-07 14:33:45.954206] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.348 [2024-06-07 14:33:45.954209] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a7990) 00:31:22.348 [2024-06-07 14:33:45.954216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.348 [2024-06-07 14:33:45.954226] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2401610, cid 3, qid 0 00:31:22.348 [2024-06-07 14:33:45.954422] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.348 [2024-06-07 14:33:45.954428] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.348 [2024-06-07 14:33:45.954431] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.348 [2024-06-07 14:33:45.954435] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2401610) on tqpair=0x23a7990 00:31:22.348 [2024-06-07 14:33:45.954445] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.348 [2024-06-07 14:33:45.954449] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.348 [2024-06-07 14:33:45.954452] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a7990) 00:31:22.348 [2024-06-07 14:33:45.954459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.348 [2024-06-07 14:33:45.954468] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2401610, cid 3, qid 0 00:31:22.349 [2024-06-07 14:33:45.954676] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.349 [2024-06-07 
14:33:45.954682] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.349 [2024-06-07 14:33:45.954686] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.349 [2024-06-07 14:33:45.954690] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2401610) on tqpair=0x23a7990 00:31:22.349 [2024-06-07 14:33:45.954700] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.349 [2024-06-07 14:33:45.954703] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.349 [2024-06-07 14:33:45.954707] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a7990) 00:31:22.349 [2024-06-07 14:33:45.954714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.349 [2024-06-07 14:33:45.954723] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2401610, cid 3, qid 0 00:31:22.349 [2024-06-07 14:33:45.954925] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.349 [2024-06-07 14:33:45.954931] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.349 [2024-06-07 14:33:45.954934] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.349 [2024-06-07 14:33:45.954938] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2401610) on tqpair=0x23a7990 00:31:22.349 [2024-06-07 14:33:45.954948] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.349 [2024-06-07 14:33:45.954952] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.349 [2024-06-07 14:33:45.954955] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a7990) 00:31:22.349 [2024-06-07 14:33:45.954962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.349 [2024-06-07 14:33:45.954971] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2401610, cid 3, qid 0 00:31:22.349 [2024-06-07 14:33:45.955176] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.349 [2024-06-07 14:33:45.955182] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.349 [2024-06-07 14:33:45.955186] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.349 [2024-06-07 14:33:45.955192] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2401610) on tqpair=0x23a7990 00:31:22.349 [2024-06-07 14:33:45.959208] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.349 [2024-06-07 14:33:45.959212] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.349 [2024-06-07 14:33:45.959216] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a7990) 00:31:22.349 [2024-06-07 14:33:45.959222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.349 [2024-06-07 14:33:45.959233] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2401610, cid 3, qid 0 00:31:22.349 [2024-06-07 14:33:45.959416] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.349 [2024-06-07 14:33:45.959422] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.349 [2024-06-07 14:33:45.959426] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
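(Note: after the discovery controller is shut down below, the test invokes spdk_nvme_identify with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', i.e. against the NVM subsystem advertised in Discovery Log Entry 1 above. The -r argument uses the same key:value format that spdk_nvme_transport_id_parse() accepts. A hedged sketch of the same step from application code follows; the helper name is illustrative and field usage should be checked against spdk/nvme.h in this tree.)

    /* Sketch: connect to the nqn.2016-06.io.spdk:cnode1 subsystem and walk its
     * active namespaces -- the trace further down reports "Namespace 1 was
     * added" for this controller. */
    #include "spdk/nvme.h"

    static int identify_cnode1(void)
    {
            struct spdk_nvme_transport_id trid = {};
            struct spdk_nvme_ctrlr *ctrlr;
            const struct spdk_nvme_ctrlr_data *cdata;
            uint32_t nsid;

            spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1");

            ctrlr = spdk_nvme_connect(&trid, NULL, 0);  /* default controller opts */
            if (ctrlr == NULL) {
                    return -1;
            }

            cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            /* cdata->cntlid and cdata->mdts feed the CNTLID / MDTS debug lines
             * seen during the identify state. */
            (void)cdata;

            for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
                 nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
                    (void)ns;
            }

            spdk_nvme_detach(ctrlr);
            return 0;
    }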
00:31:22.349 [2024-06-07 14:33:45.959429] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2401610) on tqpair=0x23a7990 00:31:22.349 [2024-06-07 14:33:45.959437] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:31:22.349 00:31:22.349 14:33:45 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:22.614 [2024-06-07 14:33:45.994947] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:31:22.614 [2024-06-07 14:33:45.995004] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid707604 ] 00:31:22.614 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.614 [2024-06-07 14:33:46.027746] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:31:22.614 [2024-06-07 14:33:46.027783] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:22.614 [2024-06-07 14:33:46.027788] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:22.614 [2024-06-07 14:33:46.027799] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:22.614 [2024-06-07 14:33:46.027807] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:22.614 [2024-06-07 14:33:46.028225] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:31:22.614 [2024-06-07 14:33:46.028250] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4b9990 0 00:31:22.614 [2024-06-07 14:33:46.034202] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:22.614 [2024-06-07 14:33:46.034211] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:22.614 [2024-06-07 14:33:46.034215] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:22.614 [2024-06-07 14:33:46.034218] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:22.614 [2024-06-07 14:33:46.034246] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.034251] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.034255] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b9990) 00:31:22.614 [2024-06-07 14:33:46.034267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:22.614 [2024-06-07 14:33:46.034281] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5131f0, cid 0, qid 0 00:31:22.614 [2024-06-07 14:33:46.042203] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.614 [2024-06-07 14:33:46.042212] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.614 [2024-06-07 14:33:46.042216] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.042220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5131f0) on 
tqpair=0x4b9990 00:31:22.614 [2024-06-07 14:33:46.042229] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:22.614 [2024-06-07 14:33:46.042234] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:31:22.614 [2024-06-07 14:33:46.042239] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:31:22.614 [2024-06-07 14:33:46.042251] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.042255] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.042258] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b9990) 00:31:22.614 [2024-06-07 14:33:46.042266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.614 [2024-06-07 14:33:46.042278] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5131f0, cid 0, qid 0 00:31:22.614 [2024-06-07 14:33:46.042459] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.614 [2024-06-07 14:33:46.042466] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.614 [2024-06-07 14:33:46.042469] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.042473] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5131f0) on tqpair=0x4b9990 00:31:22.614 [2024-06-07 14:33:46.042478] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:31:22.614 [2024-06-07 14:33:46.042485] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:31:22.614 [2024-06-07 14:33:46.042492] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.042495] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.042499] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b9990) 00:31:22.614 [2024-06-07 14:33:46.042505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.614 [2024-06-07 14:33:46.042515] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5131f0, cid 0, qid 0 00:31:22.614 [2024-06-07 14:33:46.042708] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.614 [2024-06-07 14:33:46.042714] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.614 [2024-06-07 14:33:46.042718] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.042721] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5131f0) on tqpair=0x4b9990 00:31:22.614 [2024-06-07 14:33:46.042726] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:31:22.614 [2024-06-07 14:33:46.042734] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:31:22.614 [2024-06-07 14:33:46.042740] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.042744] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.042747] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b9990) 00:31:22.614 [2024-06-07 14:33:46.042754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.614 [2024-06-07 14:33:46.042764] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5131f0, cid 0, qid 0 00:31:22.614 [2024-06-07 14:33:46.046200] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.614 [2024-06-07 14:33:46.046210] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.614 [2024-06-07 14:33:46.046214] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.046218] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5131f0) on tqpair=0x4b9990 00:31:22.614 [2024-06-07 14:33:46.046223] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:22.614 [2024-06-07 14:33:46.046232] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.046236] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.046239] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b9990) 00:31:22.614 [2024-06-07 14:33:46.046246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.614 [2024-06-07 14:33:46.046256] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5131f0, cid 0, qid 0 00:31:22.614 [2024-06-07 14:33:46.046424] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.614 [2024-06-07 14:33:46.046430] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.614 [2024-06-07 14:33:46.046434] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.046437] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5131f0) on tqpair=0x4b9990 00:31:22.614 [2024-06-07 14:33:46.046442] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:31:22.614 [2024-06-07 14:33:46.046446] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:31:22.614 [2024-06-07 14:33:46.046454] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:22.614 [2024-06-07 14:33:46.046559] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:31:22.614 [2024-06-07 14:33:46.046563] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:22.614 [2024-06-07 14:33:46.046570] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.046574] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.614 [2024-06-07 14:33:46.046577] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b9990) 00:31:22.614 [2024-06-07 14:33:46.046584] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.614 [2024-06-07 14:33:46.046593] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5131f0, cid 0, qid 0 00:31:22.614 [2024-06-07 14:33:46.046758] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.614 [2024-06-07 14:33:46.046764] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.615 [2024-06-07 14:33:46.046767] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.046771] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5131f0) on tqpair=0x4b9990 00:31:22.615 [2024-06-07 14:33:46.046776] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:22.615 [2024-06-07 14:33:46.046785] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.046788] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.046792] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b9990) 00:31:22.615 [2024-06-07 14:33:46.046798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.615 [2024-06-07 14:33:46.046808] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5131f0, cid 0, qid 0 00:31:22.615 [2024-06-07 14:33:46.047038] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.615 [2024-06-07 14:33:46.047044] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.615 [2024-06-07 14:33:46.047048] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.047052] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5131f0) on tqpair=0x4b9990 00:31:22.615 [2024-06-07 14:33:46.047056] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:22.615 [2024-06-07 14:33:46.047061] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:31:22.615 [2024-06-07 14:33:46.047068] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:31:22.615 [2024-06-07 14:33:46.047076] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:31:22.615 [2024-06-07 14:33:46.047084] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.047088] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b9990) 00:31:22.615 [2024-06-07 14:33:46.047094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.615 [2024-06-07 14:33:46.047104] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5131f0, cid 0, qid 0 00:31:22.615 [2024-06-07 14:33:46.047302] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.615 [2024-06-07 14:33:46.047309] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.615 [2024-06-07 
14:33:46.047313] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.047316] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b9990): datao=0, datal=4096, cccid=0 00:31:22.615 [2024-06-07 14:33:46.047321] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5131f0) on tqpair(0x4b9990): expected_datao=0, payload_size=4096 00:31:22.615 [2024-06-07 14:33:46.047325] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.047342] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.047346] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.093202] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.615 [2024-06-07 14:33:46.093211] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.615 [2024-06-07 14:33:46.093214] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.093218] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5131f0) on tqpair=0x4b9990 00:31:22.615 [2024-06-07 14:33:46.093225] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:31:22.615 [2024-06-07 14:33:46.093230] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:31:22.615 [2024-06-07 14:33:46.093234] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:31:22.615 [2024-06-07 14:33:46.093240] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:31:22.615 [2024-06-07 14:33:46.093245] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:31:22.615 [2024-06-07 14:33:46.093250] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:31:22.615 [2024-06-07 14:33:46.093257] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:31:22.615 [2024-06-07 14:33:46.093264] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.093268] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.093273] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b9990) 00:31:22.615 [2024-06-07 14:33:46.093280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:22.615 [2024-06-07 14:33:46.093292] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5131f0, cid 0, qid 0 00:31:22.615 [2024-06-07 14:33:46.093470] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.615 [2024-06-07 14:33:46.093476] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.615 [2024-06-07 14:33:46.093480] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.093483] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5131f0) on tqpair=0x4b9990 00:31:22.615 [2024-06-07 14:33:46.093490] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.615 [2024-06-07 
14:33:46.093493] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.093497] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b9990) 00:31:22.615 [2024-06-07 14:33:46.093503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.615 [2024-06-07 14:33:46.093509] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.093513] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.093516] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x4b9990) 00:31:22.615 [2024-06-07 14:33:46.093522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.615 [2024-06-07 14:33:46.093528] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.093531] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.093535] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x4b9990) 00:31:22.615 [2024-06-07 14:33:46.093540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.615 [2024-06-07 14:33:46.093546] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.093550] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.093553] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b9990) 00:31:22.615 [2024-06-07 14:33:46.093559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.615 [2024-06-07 14:33:46.093563] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:22.615 [2024-06-07 14:33:46.093573] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:22.615 [2024-06-07 14:33:46.093579] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.093583] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b9990) 00:31:22.615 [2024-06-07 14:33:46.093590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.615 [2024-06-07 14:33:46.093601] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5131f0, cid 0, qid 0 00:31:22.615 [2024-06-07 14:33:46.093606] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513350, cid 1, qid 0 00:31:22.615 [2024-06-07 14:33:46.093611] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5134b0, cid 2, qid 0 00:31:22.615 [2024-06-07 14:33:46.093615] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513610, cid 3, qid 0 00:31:22.615 [2024-06-07 14:33:46.093620] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513770, cid 4, qid 0 00:31:22.615 [2024-06-07 14:33:46.093827] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.615 
[2024-06-07 14:33:46.093835] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.615 [2024-06-07 14:33:46.093838] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.093842] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513770) on tqpair=0x4b9990 00:31:22.615 [2024-06-07 14:33:46.093847] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:31:22.615 [2024-06-07 14:33:46.093851] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:22.615 [2024-06-07 14:33:46.093859] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:31:22.615 [2024-06-07 14:33:46.093864] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:22.615 [2024-06-07 14:33:46.093871] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.615 [2024-06-07 14:33:46.093874] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.093878] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b9990) 00:31:22.616 [2024-06-07 14:33:46.093884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:22.616 [2024-06-07 14:33:46.093894] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513770, cid 4, qid 0 00:31:22.616 [2024-06-07 14:33:46.094088] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.616 [2024-06-07 14:33:46.094094] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.616 [2024-06-07 14:33:46.094098] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.094101] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513770) on tqpair=0x4b9990 00:31:22.616 [2024-06-07 14:33:46.094153] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:31:22.616 [2024-06-07 14:33:46.094162] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:22.616 [2024-06-07 14:33:46.094169] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.094173] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b9990) 00:31:22.616 [2024-06-07 14:33:46.094179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.616 [2024-06-07 14:33:46.094189] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513770, cid 4, qid 0 00:31:22.616 [2024-06-07 14:33:46.094388] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.616 [2024-06-07 14:33:46.094395] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.616 [2024-06-07 14:33:46.094398] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.094402] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: c2h_data info on tqpair(0x4b9990): datao=0, datal=4096, cccid=4 00:31:22.616 [2024-06-07 14:33:46.094406] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x513770) on tqpair(0x4b9990): expected_datao=0, payload_size=4096 00:31:22.616 [2024-06-07 14:33:46.094411] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.094417] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.094421] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.094644] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.616 [2024-06-07 14:33:46.094650] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.616 [2024-06-07 14:33:46.094654] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.094659] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513770) on tqpair=0x4b9990 00:31:22.616 [2024-06-07 14:33:46.094667] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:31:22.616 [2024-06-07 14:33:46.094680] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:31:22.616 [2024-06-07 14:33:46.094689] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:31:22.616 [2024-06-07 14:33:46.094696] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.094699] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b9990) 00:31:22.616 [2024-06-07 14:33:46.094706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.616 [2024-06-07 14:33:46.094716] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513770, cid 4, qid 0 00:31:22.616 [2024-06-07 14:33:46.094912] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.616 [2024-06-07 14:33:46.094918] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.616 [2024-06-07 14:33:46.094922] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.094925] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b9990): datao=0, datal=4096, cccid=4 00:31:22.616 [2024-06-07 14:33:46.094929] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x513770) on tqpair(0x4b9990): expected_datao=0, payload_size=4096 00:31:22.616 [2024-06-07 14:33:46.094934] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.094940] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.094944] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.095097] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.616 [2024-06-07 14:33:46.095103] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.616 [2024-06-07 14:33:46.095107] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.095110] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513770) on tqpair=0x4b9990 00:31:22.616 [2024-06-07 
14:33:46.095121] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:22.616 [2024-06-07 14:33:46.095130] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:22.616 [2024-06-07 14:33:46.095136] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.095140] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b9990) 00:31:22.616 [2024-06-07 14:33:46.095146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.616 [2024-06-07 14:33:46.095156] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513770, cid 4, qid 0 00:31:22.616 [2024-06-07 14:33:46.095360] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.616 [2024-06-07 14:33:46.095366] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.616 [2024-06-07 14:33:46.095370] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.095373] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b9990): datao=0, datal=4096, cccid=4 00:31:22.616 [2024-06-07 14:33:46.095378] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x513770) on tqpair(0x4b9990): expected_datao=0, payload_size=4096 00:31:22.616 [2024-06-07 14:33:46.095382] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.095388] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.095392] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.095602] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.616 [2024-06-07 14:33:46.095608] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.616 [2024-06-07 14:33:46.095612] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.095616] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513770) on tqpair=0x4b9990 00:31:22.616 [2024-06-07 14:33:46.095622] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:22.616 [2024-06-07 14:33:46.095629] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:31:22.616 [2024-06-07 14:33:46.095637] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:31:22.616 [2024-06-07 14:33:46.095643] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:22.616 [2024-06-07 14:33:46.095648] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:31:22.616 [2024-06-07 14:33:46.095652] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:31:22.616 [2024-06-07 14:33:46.095657] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:31:22.616 [2024-06-07 14:33:46.095662] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:31:22.616 [2024-06-07 14:33:46.095676] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.095680] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b9990) 00:31:22.616 [2024-06-07 14:33:46.095687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.616 [2024-06-07 14:33:46.095693] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.095697] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.095700] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4b9990) 00:31:22.616 [2024-06-07 14:33:46.095706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.616 [2024-06-07 14:33:46.095718] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513770, cid 4, qid 0 00:31:22.616 [2024-06-07 14:33:46.095723] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5138d0, cid 5, qid 0 00:31:22.616 [2024-06-07 14:33:46.095944] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.616 [2024-06-07 14:33:46.095950] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.616 [2024-06-07 14:33:46.095953] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.095957] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513770) on tqpair=0x4b9990 00:31:22.616 [2024-06-07 14:33:46.095964] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.616 [2024-06-07 14:33:46.095969] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.616 [2024-06-07 14:33:46.095973] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.095976] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5138d0) on tqpair=0x4b9990 00:31:22.616 [2024-06-07 14:33:46.095985] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.616 [2024-06-07 14:33:46.095989] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4b9990) 00:31:22.617 [2024-06-07 14:33:46.095995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.617 [2024-06-07 14:33:46.096004] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5138d0, cid 5, qid 0 00:31:22.617 [2024-06-07 14:33:46.096245] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.617 [2024-06-07 14:33:46.096252] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.617 [2024-06-07 14:33:46.096255] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.096259] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5138d0) on tqpair=0x4b9990 00:31:22.617 [2024-06-07 14:33:46.096268] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.096272] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4b9990) 00:31:22.617 [2024-06-07 14:33:46.096278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.617 [2024-06-07 14:33:46.096288] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5138d0, cid 5, qid 0 00:31:22.617 [2024-06-07 14:33:46.096477] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.617 [2024-06-07 14:33:46.096483] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.617 [2024-06-07 14:33:46.096486] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.096490] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5138d0) on tqpair=0x4b9990 00:31:22.617 [2024-06-07 14:33:46.096499] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.096502] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4b9990) 00:31:22.617 [2024-06-07 14:33:46.096509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.617 [2024-06-07 14:33:46.096518] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5138d0, cid 5, qid 0 00:31:22.617 [2024-06-07 14:33:46.096747] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.617 [2024-06-07 14:33:46.096753] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.617 [2024-06-07 14:33:46.096756] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.096760] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5138d0) on tqpair=0x4b9990 00:31:22.617 [2024-06-07 14:33:46.096770] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.096774] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4b9990) 00:31:22.617 [2024-06-07 14:33:46.096780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.617 [2024-06-07 14:33:46.096787] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.096791] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b9990) 00:31:22.617 [2024-06-07 14:33:46.096797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.617 [2024-06-07 14:33:46.096803] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.096807] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x4b9990) 00:31:22.617 [2024-06-07 14:33:46.096813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.617 [2024-06-07 14:33:46.096820] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.096823] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x4b9990) 00:31:22.617 
[2024-06-07 14:33:46.096830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.617 [2024-06-07 14:33:46.096840] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5138d0, cid 5, qid 0 00:31:22.617 [2024-06-07 14:33:46.096848] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513770, cid 4, qid 0 00:31:22.617 [2024-06-07 14:33:46.096852] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513a30, cid 6, qid 0 00:31:22.617 [2024-06-07 14:33:46.096857] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513b90, cid 7, qid 0 00:31:22.617 [2024-06-07 14:33:46.100202] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.617 [2024-06-07 14:33:46.100210] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.617 [2024-06-07 14:33:46.100213] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100217] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b9990): datao=0, datal=8192, cccid=5 00:31:22.617 [2024-06-07 14:33:46.100221] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5138d0) on tqpair(0x4b9990): expected_datao=0, payload_size=8192 00:31:22.617 [2024-06-07 14:33:46.100225] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100232] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100236] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100241] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.617 [2024-06-07 14:33:46.100247] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.617 [2024-06-07 14:33:46.100250] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100254] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b9990): datao=0, datal=512, cccid=4 00:31:22.617 [2024-06-07 14:33:46.100258] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x513770) on tqpair(0x4b9990): expected_datao=0, payload_size=512 00:31:22.617 [2024-06-07 14:33:46.100262] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100268] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100272] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100277] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.617 [2024-06-07 14:33:46.100283] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.617 [2024-06-07 14:33:46.100286] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100290] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b9990): datao=0, datal=512, cccid=6 00:31:22.617 [2024-06-07 14:33:46.100294] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x513a30) on tqpair(0x4b9990): expected_datao=0, payload_size=512 00:31:22.617 [2024-06-07 14:33:46.100298] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100304] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100308] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100313] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:22.617 [2024-06-07 14:33:46.100319] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:22.617 [2024-06-07 14:33:46.100322] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100326] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b9990): datao=0, datal=4096, cccid=7 00:31:22.617 [2024-06-07 14:33:46.100330] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x513b90) on tqpair(0x4b9990): expected_datao=0, payload_size=4096 00:31:22.617 [2024-06-07 14:33:46.100334] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100341] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100344] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100350] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.617 [2024-06-07 14:33:46.100355] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.617 [2024-06-07 14:33:46.100359] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100364] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5138d0) on tqpair=0x4b9990 00:31:22.617 [2024-06-07 14:33:46.100376] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.617 [2024-06-07 14:33:46.100382] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.617 [2024-06-07 14:33:46.100386] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100389] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513770) on tqpair=0x4b9990 00:31:22.617 [2024-06-07 14:33:46.100398] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.617 [2024-06-07 14:33:46.100403] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.617 [2024-06-07 14:33:46.100407] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.617 [2024-06-07 14:33:46.100410] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513a30) on tqpair=0x4b9990 00:31:22.617 [2024-06-07 14:33:46.100419] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.617 [2024-06-07 14:33:46.100425] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.617 [2024-06-07 14:33:46.100428] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.618 [2024-06-07 14:33:46.100431] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513b90) on tqpair=0x4b9990 00:31:22.618 ===================================================== 00:31:22.618 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:22.618 ===================================================== 00:31:22.618 Controller Capabilities/Features 00:31:22.618 ================================ 00:31:22.618 Vendor ID: 8086 00:31:22.618 Subsystem Vendor ID: 8086 00:31:22.618 Serial Number: SPDK00000000000001 00:31:22.618 Model Number: SPDK bdev Controller 00:31:22.618 Firmware Version: 24.09 00:31:22.618 Recommended Arb Burst: 6 00:31:22.618 IEEE OUI Identifier: e4 d2 5c 00:31:22.618 Multi-path I/O 00:31:22.618 May have multiple subsystem ports: Yes 
00:31:22.618 May have multiple controllers: Yes 00:31:22.618 Associated with SR-IOV VF: No 00:31:22.618 Max Data Transfer Size: 131072 00:31:22.618 Max Number of Namespaces: 32 00:31:22.618 Max Number of I/O Queues: 127 00:31:22.618 NVMe Specification Version (VS): 1.3 00:31:22.618 NVMe Specification Version (Identify): 1.3 00:31:22.618 Maximum Queue Entries: 128 00:31:22.618 Contiguous Queues Required: Yes 00:31:22.618 Arbitration Mechanisms Supported 00:31:22.618 Weighted Round Robin: Not Supported 00:31:22.618 Vendor Specific: Not Supported 00:31:22.618 Reset Timeout: 15000 ms 00:31:22.618 Doorbell Stride: 4 bytes 00:31:22.618 NVM Subsystem Reset: Not Supported 00:31:22.618 Command Sets Supported 00:31:22.618 NVM Command Set: Supported 00:31:22.618 Boot Partition: Not Supported 00:31:22.618 Memory Page Size Minimum: 4096 bytes 00:31:22.618 Memory Page Size Maximum: 4096 bytes 00:31:22.618 Persistent Memory Region: Not Supported 00:31:22.618 Optional Asynchronous Events Supported 00:31:22.618 Namespace Attribute Notices: Supported 00:31:22.618 Firmware Activation Notices: Not Supported 00:31:22.618 ANA Change Notices: Not Supported 00:31:22.618 PLE Aggregate Log Change Notices: Not Supported 00:31:22.618 LBA Status Info Alert Notices: Not Supported 00:31:22.618 EGE Aggregate Log Change Notices: Not Supported 00:31:22.618 Normal NVM Subsystem Shutdown event: Not Supported 00:31:22.618 Zone Descriptor Change Notices: Not Supported 00:31:22.618 Discovery Log Change Notices: Not Supported 00:31:22.618 Controller Attributes 00:31:22.618 128-bit Host Identifier: Supported 00:31:22.618 Non-Operational Permissive Mode: Not Supported 00:31:22.618 NVM Sets: Not Supported 00:31:22.618 Read Recovery Levels: Not Supported 00:31:22.618 Endurance Groups: Not Supported 00:31:22.618 Predictable Latency Mode: Not Supported 00:31:22.618 Traffic Based Keep ALive: Not Supported 00:31:22.618 Namespace Granularity: Not Supported 00:31:22.618 SQ Associations: Not Supported 00:31:22.618 UUID List: Not Supported 00:31:22.618 Multi-Domain Subsystem: Not Supported 00:31:22.618 Fixed Capacity Management: Not Supported 00:31:22.618 Variable Capacity Management: Not Supported 00:31:22.618 Delete Endurance Group: Not Supported 00:31:22.618 Delete NVM Set: Not Supported 00:31:22.618 Extended LBA Formats Supported: Not Supported 00:31:22.618 Flexible Data Placement Supported: Not Supported 00:31:22.618 00:31:22.618 Controller Memory Buffer Support 00:31:22.618 ================================ 00:31:22.618 Supported: No 00:31:22.618 00:31:22.618 Persistent Memory Region Support 00:31:22.618 ================================ 00:31:22.618 Supported: No 00:31:22.618 00:31:22.618 Admin Command Set Attributes 00:31:22.618 ============================ 00:31:22.618 Security Send/Receive: Not Supported 00:31:22.618 Format NVM: Not Supported 00:31:22.618 Firmware Activate/Download: Not Supported 00:31:22.618 Namespace Management: Not Supported 00:31:22.618 Device Self-Test: Not Supported 00:31:22.618 Directives: Not Supported 00:31:22.618 NVMe-MI: Not Supported 00:31:22.618 Virtualization Management: Not Supported 00:31:22.618 Doorbell Buffer Config: Not Supported 00:31:22.618 Get LBA Status Capability: Not Supported 00:31:22.618 Command & Feature Lockdown Capability: Not Supported 00:31:22.618 Abort Command Limit: 4 00:31:22.618 Async Event Request Limit: 4 00:31:22.618 Number of Firmware Slots: N/A 00:31:22.618 Firmware Slot 1 Read-Only: N/A 00:31:22.618 Firmware Activation Without Reset: N/A 00:31:22.618 Multiple Update 
Detection Support: N/A 00:31:22.618 Firmware Update Granularity: No Information Provided 00:31:22.618 Per-Namespace SMART Log: No 00:31:22.618 Asymmetric Namespace Access Log Page: Not Supported 00:31:22.618 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:22.618 Command Effects Log Page: Supported 00:31:22.618 Get Log Page Extended Data: Supported 00:31:22.618 Telemetry Log Pages: Not Supported 00:31:22.618 Persistent Event Log Pages: Not Supported 00:31:22.618 Supported Log Pages Log Page: May Support 00:31:22.618 Commands Supported & Effects Log Page: Not Supported 00:31:22.618 Feature Identifiers & Effects Log Page:May Support 00:31:22.618 NVMe-MI Commands & Effects Log Page: May Support 00:31:22.618 Data Area 4 for Telemetry Log: Not Supported 00:31:22.618 Error Log Page Entries Supported: 128 00:31:22.618 Keep Alive: Supported 00:31:22.618 Keep Alive Granularity: 10000 ms 00:31:22.618 00:31:22.618 NVM Command Set Attributes 00:31:22.618 ========================== 00:31:22.618 Submission Queue Entry Size 00:31:22.618 Max: 64 00:31:22.618 Min: 64 00:31:22.618 Completion Queue Entry Size 00:31:22.618 Max: 16 00:31:22.618 Min: 16 00:31:22.618 Number of Namespaces: 32 00:31:22.618 Compare Command: Supported 00:31:22.618 Write Uncorrectable Command: Not Supported 00:31:22.618 Dataset Management Command: Supported 00:31:22.618 Write Zeroes Command: Supported 00:31:22.618 Set Features Save Field: Not Supported 00:31:22.618 Reservations: Supported 00:31:22.618 Timestamp: Not Supported 00:31:22.618 Copy: Supported 00:31:22.618 Volatile Write Cache: Present 00:31:22.618 Atomic Write Unit (Normal): 1 00:31:22.618 Atomic Write Unit (PFail): 1 00:31:22.618 Atomic Compare & Write Unit: 1 00:31:22.618 Fused Compare & Write: Supported 00:31:22.618 Scatter-Gather List 00:31:22.618 SGL Command Set: Supported 00:31:22.618 SGL Keyed: Supported 00:31:22.618 SGL Bit Bucket Descriptor: Not Supported 00:31:22.618 SGL Metadata Pointer: Not Supported 00:31:22.618 Oversized SGL: Not Supported 00:31:22.618 SGL Metadata Address: Not Supported 00:31:22.618 SGL Offset: Supported 00:31:22.618 Transport SGL Data Block: Not Supported 00:31:22.618 Replay Protected Memory Block: Not Supported 00:31:22.618 00:31:22.618 Firmware Slot Information 00:31:22.618 ========================= 00:31:22.618 Active slot: 1 00:31:22.618 Slot 1 Firmware Revision: 24.09 00:31:22.618 00:31:22.618 00:31:22.618 Commands Supported and Effects 00:31:22.618 ============================== 00:31:22.618 Admin Commands 00:31:22.618 -------------- 00:31:22.618 Get Log Page (02h): Supported 00:31:22.618 Identify (06h): Supported 00:31:22.618 Abort (08h): Supported 00:31:22.618 Set Features (09h): Supported 00:31:22.618 Get Features (0Ah): Supported 00:31:22.618 Asynchronous Event Request (0Ch): Supported 00:31:22.618 Keep Alive (18h): Supported 00:31:22.618 I/O Commands 00:31:22.618 ------------ 00:31:22.618 Flush (00h): Supported LBA-Change 00:31:22.618 Write (01h): Supported LBA-Change 00:31:22.619 Read (02h): Supported 00:31:22.619 Compare (05h): Supported 00:31:22.619 Write Zeroes (08h): Supported LBA-Change 00:31:22.619 Dataset Management (09h): Supported LBA-Change 00:31:22.619 Copy (19h): Supported LBA-Change 00:31:22.619 Unknown (79h): Supported LBA-Change 00:31:22.619 Unknown (7Ah): Supported 00:31:22.619 00:31:22.619 Error Log 00:31:22.619 ========= 00:31:22.619 00:31:22.619 Arbitration 00:31:22.619 =========== 00:31:22.619 Arbitration Burst: 1 00:31:22.619 00:31:22.619 Power Management 00:31:22.619 ================ 00:31:22.619 Number of 
Power States: 1 00:31:22.619 Current Power State: Power State #0 00:31:22.619 Power State #0: 00:31:22.619 Max Power: 0.00 W 00:31:22.619 Non-Operational State: Operational 00:31:22.619 Entry Latency: Not Reported 00:31:22.619 Exit Latency: Not Reported 00:31:22.619 Relative Read Throughput: 0 00:31:22.619 Relative Read Latency: 0 00:31:22.619 Relative Write Throughput: 0 00:31:22.619 Relative Write Latency: 0 00:31:22.619 Idle Power: Not Reported 00:31:22.619 Active Power: Not Reported 00:31:22.619 Non-Operational Permissive Mode: Not Supported 00:31:22.619 00:31:22.619 Health Information 00:31:22.619 ================== 00:31:22.619 Critical Warnings: 00:31:22.619 Available Spare Space: OK 00:31:22.619 Temperature: OK 00:31:22.619 Device Reliability: OK 00:31:22.619 Read Only: No 00:31:22.619 Volatile Memory Backup: OK 00:31:22.619 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:22.619 Temperature Threshold: [2024-06-07 14:33:46.100530] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.100536] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x4b9990) 00:31:22.619 [2024-06-07 14:33:46.100542] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.619 [2024-06-07 14:33:46.100554] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513b90, cid 7, qid 0 00:31:22.619 [2024-06-07 14:33:46.100776] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.619 [2024-06-07 14:33:46.100782] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.619 [2024-06-07 14:33:46.100786] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.100789] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513b90) on tqpair=0x4b9990 00:31:22.619 [2024-06-07 14:33:46.100818] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:31:22.619 [2024-06-07 14:33:46.100829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.619 [2024-06-07 14:33:46.100835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.619 [2024-06-07 14:33:46.100841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.619 [2024-06-07 14:33:46.100847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.619 [2024-06-07 14:33:46.100855] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.100858] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.100862] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b9990) 00:31:22.619 [2024-06-07 14:33:46.100869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.619 [2024-06-07 14:33:46.100880] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513610, cid 3, qid 0 00:31:22.619 [2024-06-07 14:33:46.101075] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.619 
[2024-06-07 14:33:46.101081] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.619 [2024-06-07 14:33:46.101085] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.101089] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513610) on tqpair=0x4b9990 00:31:22.619 [2024-06-07 14:33:46.101095] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.101101] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.101105] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b9990) 00:31:22.619 [2024-06-07 14:33:46.101111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.619 [2024-06-07 14:33:46.101124] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513610, cid 3, qid 0 00:31:22.619 [2024-06-07 14:33:46.101405] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.619 [2024-06-07 14:33:46.101412] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.619 [2024-06-07 14:33:46.101415] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.101419] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513610) on tqpair=0x4b9990 00:31:22.619 [2024-06-07 14:33:46.101424] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:31:22.619 [2024-06-07 14:33:46.101428] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:31:22.619 [2024-06-07 14:33:46.101437] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.101441] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.101445] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b9990) 00:31:22.619 [2024-06-07 14:33:46.101451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.619 [2024-06-07 14:33:46.101461] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513610, cid 3, qid 0 00:31:22.619 [2024-06-07 14:33:46.101657] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.619 [2024-06-07 14:33:46.101663] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.619 [2024-06-07 14:33:46.101666] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.101670] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513610) on tqpair=0x4b9990 00:31:22.619 [2024-06-07 14:33:46.101679] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.101683] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.101687] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b9990) 00:31:22.619 [2024-06-07 14:33:46.101693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.619 [2024-06-07 14:33:46.101702] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513610, cid 3, qid 0 00:31:22.619 
[2024-06-07 14:33:46.101930] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.619 [2024-06-07 14:33:46.101936] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.619 [2024-06-07 14:33:46.101940] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.101943] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513610) on tqpair=0x4b9990 00:31:22.619 [2024-06-07 14:33:46.101953] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.101956] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.619 [2024-06-07 14:33:46.101960] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b9990) 00:31:22.619 [2024-06-07 14:33:46.101966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.619 [2024-06-07 14:33:46.101976] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513610, cid 3, qid 0 00:31:22.619 [2024-06-07 14:33:46.102183] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.619 [2024-06-07 14:33:46.102189] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.619 [2024-06-07 14:33:46.102192] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.102201] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513610) on tqpair=0x4b9990 00:31:22.620 [2024-06-07 14:33:46.102210] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.102214] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.102218] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b9990) 00:31:22.620 [2024-06-07 14:33:46.102224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.620 [2024-06-07 14:33:46.102234] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513610, cid 3, qid 0 00:31:22.620 [2024-06-07 14:33:46.102384] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.620 [2024-06-07 14:33:46.102390] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.620 [2024-06-07 14:33:46.102394] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.102398] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513610) on tqpair=0x4b9990 00:31:22.620 [2024-06-07 14:33:46.102407] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.102411] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.102414] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b9990) 00:31:22.620 [2024-06-07 14:33:46.102421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.620 [2024-06-07 14:33:46.102430] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513610, cid 3, qid 0 00:31:22.620 [2024-06-07 14:33:46.102648] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.620 [2024-06-07 14:33:46.102654] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:31:22.620 [2024-06-07 14:33:46.102658] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.102661] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513610) on tqpair=0x4b9990 00:31:22.620 [2024-06-07 14:33:46.102670] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.102674] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.102678] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b9990) 00:31:22.620 [2024-06-07 14:33:46.102684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.620 [2024-06-07 14:33:46.102694] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513610, cid 3, qid 0 00:31:22.620 [2024-06-07 14:33:46.102888] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.620 [2024-06-07 14:33:46.102894] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.620 [2024-06-07 14:33:46.102897] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.102901] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513610) on tqpair=0x4b9990 00:31:22.620 [2024-06-07 14:33:46.102910] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.102914] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.102918] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b9990) 00:31:22.620 [2024-06-07 14:33:46.102924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.620 [2024-06-07 14:33:46.102934] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513610, cid 3, qid 0 00:31:22.620 [2024-06-07 14:33:46.103090] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.620 [2024-06-07 14:33:46.103096] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.620 [2024-06-07 14:33:46.103099] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.103103] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513610) on tqpair=0x4b9990 00:31:22.620 [2024-06-07 14:33:46.103114] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.103118] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.103121] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b9990) 00:31:22.620 [2024-06-07 14:33:46.103128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.620 [2024-06-07 14:33:46.103138] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513610, cid 3, qid 0 00:31:22.620 [2024-06-07 14:33:46.103342] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.620 [2024-06-07 14:33:46.103349] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.620 [2024-06-07 14:33:46.103352] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.103356] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x513610) on tqpair=0x4b9990 00:31:22.620 [2024-06-07 14:33:46.103365] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.103369] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.103373] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b9990) 00:31:22.620 [2024-06-07 14:33:46.103379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.620 [2024-06-07 14:33:46.103389] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513610, cid 3, qid 0 00:31:22.620 [2024-06-07 14:33:46.103592] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.620 [2024-06-07 14:33:46.103599] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.620 [2024-06-07 14:33:46.103602] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.103606] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513610) on tqpair=0x4b9990 00:31:22.620 [2024-06-07 14:33:46.103615] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.103619] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.103622] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b9990) 00:31:22.620 [2024-06-07 14:33:46.103629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.620 [2024-06-07 14:33:46.103638] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513610, cid 3, qid 0 00:31:22.620 [2024-06-07 14:33:46.103898] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.620 [2024-06-07 14:33:46.103904] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.620 [2024-06-07 14:33:46.103907] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.103911] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513610) on tqpair=0x4b9990 00:31:22.620 [2024-06-07 14:33:46.103920] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.103924] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.103927] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b9990) 00:31:22.620 [2024-06-07 14:33:46.103934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.620 [2024-06-07 14:33:46.103943] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513610, cid 3, qid 0 00:31:22.620 [2024-06-07 14:33:46.104150] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.620 [2024-06-07 14:33:46.104156] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.620 [2024-06-07 14:33:46.104159] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.104163] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513610) on tqpair=0x4b9990 00:31:22.620 [2024-06-07 14:33:46.104172] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.104178] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.104181] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b9990) 00:31:22.620 [2024-06-07 14:33:46.104188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.620 [2024-06-07 14:33:46.108203] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x513610, cid 3, qid 0 00:31:22.620 [2024-06-07 14:33:46.108424] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:22.620 [2024-06-07 14:33:46.108431] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:22.620 [2024-06-07 14:33:46.108434] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:22.620 [2024-06-07 14:33:46.108438] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x513610) on tqpair=0x4b9990 00:31:22.620 [2024-06-07 14:33:46.108445] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:31:22.620 0 Kelvin (-273 Celsius) 00:31:22.620 Available Spare: 0% 00:31:22.620 Available Spare Threshold: 0% 00:31:22.620 Life Percentage Used: 0% 00:31:22.620 Data Units Read: 0 00:31:22.620 Data Units Written: 0 00:31:22.620 Host Read Commands: 0 00:31:22.620 Host Write Commands: 0 00:31:22.620 Controller Busy Time: 0 minutes 00:31:22.620 Power Cycles: 0 00:31:22.620 Power On Hours: 0 hours 00:31:22.620 Unsafe Shutdowns: 0 00:31:22.620 Unrecoverable Media Errors: 0 00:31:22.620 Lifetime Error Log Entries: 0 00:31:22.620 Warning Temperature Time: 0 minutes 00:31:22.620 Critical Temperature Time: 0 minutes 00:31:22.620 00:31:22.620 Number of Queues 00:31:22.620 ================ 00:31:22.620 Number of I/O Submission Queues: 127 00:31:22.620 Number of I/O Completion Queues: 127 00:31:22.620 00:31:22.620 Active Namespaces 00:31:22.620 ================= 00:31:22.620 Namespace ID:1 00:31:22.620 Error Recovery Timeout: Unlimited 00:31:22.620 Command Set Identifier: NVM (00h) 00:31:22.620 Deallocate: Supported 00:31:22.620 Deallocated/Unwritten Error: Not Supported 00:31:22.620 Deallocated Read Value: Unknown 00:31:22.620 Deallocate in Write Zeroes: Not Supported 00:31:22.620 Deallocated Guard Field: 0xFFFF 00:31:22.620 Flush: Supported 00:31:22.620 Reservation: Supported 00:31:22.620 Namespace Sharing Capabilities: Multiple Controllers 00:31:22.620 Size (in LBAs): 131072 (0GiB) 00:31:22.620 Capacity (in LBAs): 131072 (0GiB) 00:31:22.620 Utilization (in LBAs): 131072 (0GiB) 00:31:22.620 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:22.620 EUI64: ABCDEF0123456789 00:31:22.621 UUID: c6fda70b-0be2-4e02-816e-8858a43ea973 00:31:22.621 Thin Provisioning: Not Supported 00:31:22.621 Per-NS Atomic Units: Yes 00:31:22.621 Atomic Boundary Size (Normal): 0 00:31:22.621 Atomic Boundary Size (PFail): 0 00:31:22.621 Atomic Boundary Offset: 0 00:31:22.621 Maximum Single Source Range Length: 65535 00:31:22.621 Maximum Copy Length: 65535 00:31:22.621 Maximum Source Range Count: 1 00:31:22.621 NGUID/EUI64 Never Reused: No 00:31:22.621 Namespace Write Protected: No 00:31:22.621 Number of LBA Formats: 1 00:31:22.621 Current LBA Format: LBA Format #00 00:31:22.621 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:22.621 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:22.621 rmmod nvme_tcp 00:31:22.621 rmmod nvme_fabrics 00:31:22.621 rmmod nvme_keyring 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 707296 ']' 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 707296 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 707296 ']' 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 707296 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 707296 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 707296' 00:31:22.621 killing process with pid 707296 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 707296 00:31:22.621 14:33:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 707296 00:31:22.882 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:22.882 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:22.882 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:22.882 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:22.882 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:22.882 14:33:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.882 14:33:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:22.882 14:33:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.427 14:33:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:25.427 00:31:25.427 real 
0m11.826s 00:31:25.427 user 0m7.823s 00:31:25.427 sys 0m6.413s 00:31:25.427 14:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:25.427 14:33:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:25.427 ************************************ 00:31:25.427 END TEST nvmf_identify 00:31:25.427 ************************************ 00:31:25.427 14:33:48 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:25.427 14:33:48 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:25.427 14:33:48 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:25.427 14:33:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:25.427 ************************************ 00:31:25.427 START TEST nvmf_perf 00:31:25.427 ************************************ 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:25.427 * Looking for test storage... 00:31:25.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- 
host/perf.sh@17 -- # nvmftestinit 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:31:25.427 14:33:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:33.595 14:33:56 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:33.595 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:33.595 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:33.595 Found net devices under 0000:31:00.0: cvl_0_0 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:33.595 Found net devices under 0000:31:00.1: cvl_0_1 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:33.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:33.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:31:33.595 00:31:33.595 --- 10.0.0.2 ping statistics --- 00:31:33.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.595 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:31:33.595 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:33.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:33.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:31:33.595 00:31:33.595 --- 10.0.0.1 ping statistics --- 00:31:33.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.595 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=712273 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 712273 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 712273 ']' 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:33.596 14:33:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:33.596 [2024-06-07 14:33:57.023850] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
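At this point nvmf_tcp_init has finished building the split-namespace topology used for the rest of the test: the target-side port (cvl_0_0) lives in its own network namespace with 10.0.0.2, the initiator-side port (cvl_0_1) stays in the default namespace with 10.0.0.1, and the cross-namespace pings above confirm connectivity before nvmf_tgt is started inside that namespace with ip netns exec (-i 0 -e 0xFFFF -m 0xF). Reduced to its essentials, and using the interface names detected in this run, the setup above is roughly:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in on the initiator port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
This is only a sketch of what nvmf/common.sh does above; the real helper also handles device detection and cleanup.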
00:31:33.596 [2024-06-07 14:33:57.023923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.596 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.596 [2024-06-07 14:33:57.102162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:33.596 [2024-06-07 14:33:57.143040] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:33.596 [2024-06-07 14:33:57.143078] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:33.596 [2024-06-07 14:33:57.143086] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:33.596 [2024-06-07 14:33:57.143093] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:33.596 [2024-06-07 14:33:57.143099] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:33.596 [2024-06-07 14:33:57.143234] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.596 [2024-06-07 14:33:57.143364] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:33.596 [2024-06-07 14:33:57.143511] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.596 [2024-06-07 14:33:57.143512] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:34.166 14:33:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:34.166 14:33:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:31:34.166 14:33:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:34.166 14:33:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:34.166 14:33:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:34.426 14:33:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:34.426 14:33:57 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:34.426 14:33:57 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:34.687 14:33:58 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:34.687 14:33:58 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:34.947 14:33:58 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:31:34.947 14:33:58 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:35.208 14:33:58 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:35.208 14:33:58 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:31:35.208 14:33:58 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:35.208 14:33:58 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:35.208 14:33:58 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:35.208 [2024-06-07 14:33:58.810096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
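The transport call just above and the subsystem calls that follow assemble the target configuration the perf runs will connect to. Stripped of the long workspace paths, the rpc.py sequence is roughly:
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py bdev_malloc_create 64 512                                         # 64 MB malloc bdev -> Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # added first, so it shows up as NSID 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1          # local NVMe at 0000:65:00.0 -> NSID 2
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
Keep the Malloc0/Nvme0n1 -> NSID 1/NSID 2 mapping in mind when reading the per-namespace rows in the latency tables that follow; the first standalone perf run (trtype:PCIe traddr:0000:65:00.0) exercises the local drive directly as a baseline before the fabrics runs.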
00:31:35.208 14:33:58 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:35.469 14:33:59 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:35.469 14:33:59 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:35.730 14:33:59 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:35.730 14:33:59 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:35.730 14:33:59 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:35.991 [2024-06-07 14:33:59.492610] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.991 14:33:59 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:36.252 14:33:59 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:31:36.252 14:33:59 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:36.252 14:33:59 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:36.252 14:33:59 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:37.636 Initializing NVMe Controllers 00:31:37.636 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:31:37.636 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:31:37.636 Initialization complete. Launching workers. 00:31:37.636 ======================================================== 00:31:37.636 Latency(us) 00:31:37.636 Device Information : IOPS MiB/s Average min max 00:31:37.636 PCIE (0000:65:00.0) NSID 1 from core 0: 78768.74 307.69 405.75 13.36 5856.59 00:31:37.636 ======================================================== 00:31:37.636 Total : 78768.74 307.69 405.75 13.36 5856.59 00:31:37.636 00:31:37.636 14:34:00 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:37.636 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.019 Initializing NVMe Controllers 00:31:39.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:39.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:39.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:39.019 Initialization complete. Launching workers. 
00:31:39.019 ======================================================== 00:31:39.019 Latency(us) 00:31:39.019 Device Information : IOPS MiB/s Average min max 00:31:39.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 56.00 0.22 18224.64 276.49 45950.19 00:31:39.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15221.26 6979.09 47889.30 00:31:39.019 ======================================================== 00:31:39.019 Total : 122.00 0.48 16599.86 276.49 47889.30 00:31:39.019 00:31:39.019 14:34:02 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:39.019 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.403 Initializing NVMe Controllers 00:31:40.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:40.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:40.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:40.403 Initialization complete. Launching workers. 00:31:40.403 ======================================================== 00:31:40.403 Latency(us) 00:31:40.403 Device Information : IOPS MiB/s Average min max 00:31:40.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10755.94 42.02 2976.89 469.43 8802.13 00:31:40.403 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3670.57 14.34 8726.01 4839.24 16964.29 00:31:40.403 ======================================================== 00:31:40.403 Total : 14426.51 56.35 4439.65 469.43 16964.29 00:31:40.403 00:31:40.403 14:34:03 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:40.403 14:34:03 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:40.403 14:34:03 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:40.403 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.942 Initializing NVMe Controllers 00:31:42.943 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:42.943 Controller IO queue size 128, less than required. 00:31:42.943 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:42.943 Controller IO queue size 128, less than required. 00:31:42.943 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:42.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:42.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:42.943 Initialization complete. Launching workers. 
00:31:42.943 ======================================================== 00:31:42.943 Latency(us) 00:31:42.943 Device Information : IOPS MiB/s Average min max 00:31:42.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1597.92 399.48 81372.81 53866.79 137494.90 00:31:42.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 579.29 144.82 227699.95 70258.58 359212.35 00:31:42.943 ======================================================== 00:31:42.943 Total : 2177.21 544.30 120306.04 53866.79 359212.35 00:31:42.943 00:31:42.943 14:34:06 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:42.943 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.202 No valid NVMe controllers or AIO or URING devices found 00:31:43.202 Initializing NVMe Controllers 00:31:43.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:43.202 Controller IO queue size 128, less than required. 00:31:43.202 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:43.202 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:43.202 Controller IO queue size 128, less than required. 00:31:43.202 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:43.202 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:43.202 WARNING: Some requested NVMe devices were skipped 00:31:43.202 14:34:06 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:43.202 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.743 Initializing NVMe Controllers 00:31:45.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:45.743 Controller IO queue size 128, less than required. 00:31:45.743 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:45.743 Controller IO queue size 128, less than required. 00:31:45.743 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:45.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:45.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:45.743 Initialization complete. Launching workers. 
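Two things worth noting before the statistics below. First, the -o 36964 run above intentionally produced no I/O: 36964 is not a multiple of the namespaces' 512-byte sector size (72 * 512 = 36864, remainder 100), so spdk_nvme_perf drops both namespaces from the test and reports that no valid controllers remain. Second, the run now starting repeats the 128-deep 256 KiB random read/write workload with --transport-stat; in its output, polls and idle_polls can be read roughly as how often the TCP poll group ran and how many of those passes found nothing to do (about 10682 of 22169, roughly 48%, for the first namespace here), sock_completions as socket events handled, and nvme_completions, submitted_requests and queued_requests as NVMe-level completions, submissions, and submissions that had to wait for a free request.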
00:31:45.743 00:31:45.743 ==================== 00:31:45.743 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:45.743 TCP transport: 00:31:45.743 polls: 22169 00:31:45.743 idle_polls: 10682 00:31:45.743 sock_completions: 11487 00:31:45.743 nvme_completions: 6531 00:31:45.743 submitted_requests: 9758 00:31:45.743 queued_requests: 1 00:31:45.743 00:31:45.743 ==================== 00:31:45.743 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:45.743 TCP transport: 00:31:45.743 polls: 24540 00:31:45.743 idle_polls: 13154 00:31:45.743 sock_completions: 11386 00:31:45.743 nvme_completions: 6511 00:31:45.743 submitted_requests: 9786 00:31:45.743 queued_requests: 1 00:31:45.743 ======================================================== 00:31:45.743 Latency(us) 00:31:45.743 Device Information : IOPS MiB/s Average min max 00:31:45.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1629.82 407.45 80335.08 44625.71 137282.19 00:31:45.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1624.82 406.21 78966.19 36887.78 140828.26 00:31:45.743 ======================================================== 00:31:45.743 Total : 3254.64 813.66 79651.69 36887.78 140828.26 00:31:45.743 00:31:45.743 14:34:09 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:45.743 14:34:09 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:45.743 14:34:09 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:45.743 14:34:09 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:31:45.743 14:34:09 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=56d35c58-0494-42af-8080-c5d15152d329 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 56d35c58-0494-42af-8080-c5d15152d329 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_uuid=56d35c58-0494-42af-8080-c5d15152d329 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_info 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local fc 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local cs 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:31:47.125 { 00:31:47.125 "uuid": "56d35c58-0494-42af-8080-c5d15152d329", 00:31:47.125 "name": "lvs_0", 00:31:47.125 "base_bdev": "Nvme0n1", 00:31:47.125 "total_data_clusters": 457407, 00:31:47.125 "free_clusters": 457407, 00:31:47.125 "block_size": 512, 00:31:47.125 "cluster_size": 4194304 00:31:47.125 } 00:31:47.125 ]' 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="56d35c58-0494-42af-8080-c5d15152d329") .free_clusters' 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # fc=457407 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="56d35c58-0494-42af-8080-c5d15152d329") .cluster_size' 00:31:47.125 14:34:10 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # cs=4194304 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1372 -- # free_mb=1829628 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # echo 1829628 00:31:47.125 1829628 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:47.125 14:34:10 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 56d35c58-0494-42af-8080-c5d15152d329 lbd_0 20480 00:31:47.385 14:34:10 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=006ea823-6f2a-41fc-9249-fd0190c78ee9 00:31:47.385 14:34:10 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 006ea823-6f2a-41fc-9249-fd0190c78ee9 lvs_n_0 00:31:48.767 14:34:12 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=bacb5d3f-bbf2-45c9-a77a-39d9c8fa7203 00:31:48.767 14:34:12 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb bacb5d3f-bbf2-45c9-a77a-39d9c8fa7203 00:31:48.767 14:34:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_uuid=bacb5d3f-bbf2-45c9-a77a-39d9c8fa7203 00:31:48.767 14:34:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_info 00:31:48.767 14:34:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local fc 00:31:48.767 14:34:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local cs 00:31:48.767 14:34:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:49.028 14:34:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:31:49.028 { 00:31:49.028 "uuid": "56d35c58-0494-42af-8080-c5d15152d329", 00:31:49.028 "name": "lvs_0", 00:31:49.028 "base_bdev": "Nvme0n1", 00:31:49.028 "total_data_clusters": 457407, 00:31:49.028 "free_clusters": 452287, 00:31:49.028 "block_size": 512, 00:31:49.028 "cluster_size": 4194304 00:31:49.028 }, 00:31:49.028 { 00:31:49.028 "uuid": "bacb5d3f-bbf2-45c9-a77a-39d9c8fa7203", 00:31:49.028 "name": "lvs_n_0", 00:31:49.028 "base_bdev": "006ea823-6f2a-41fc-9249-fd0190c78ee9", 00:31:49.028 "total_data_clusters": 5114, 00:31:49.028 "free_clusters": 5114, 00:31:49.028 "block_size": 512, 00:31:49.028 "cluster_size": 4194304 00:31:49.028 } 00:31:49.028 ]' 00:31:49.028 14:34:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="bacb5d3f-bbf2-45c9-a77a-39d9c8fa7203") .free_clusters' 00:31:49.028 14:34:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # fc=5114 00:31:49.028 14:34:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="bacb5d3f-bbf2-45c9-a77a-39d9c8fa7203") .cluster_size' 00:31:49.028 14:34:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # cs=4194304 00:31:49.028 14:34:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1372 -- # free_mb=20456 00:31:49.028 14:34:12 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # echo 20456 00:31:49.028 20456 00:31:49.028 14:34:12 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:49.028 14:34:12 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bacb5d3f-bbf2-45c9-a77a-39d9c8fa7203 lbd_nest_0 20456 00:31:49.289 14:34:12 
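The lvol layering being built here is: lvs_0 on top of Nvme0n1, a 20480 MB volume lbd_0 carved from it, a second store lvs_n_0 on top of lbd_0, and finally lbd_nest_0, which becomes the namespace for the next set of perf runs. get_lvs_free_mb is plain cluster arithmetic, free_mb = free_clusters * cluster_size / 1 MiB: lvs_0 reports 457407 * 4 MiB = 1829628 MB (clamped to 20480 by the '-gt 20480' check before lbd_0 is created), and lvs_n_0 reports 5114 * 4 MiB = 20456 MB, a little under 20480 because a few clusters go to lvstore metadata. As a short rpc.py sketch, reusing the UUIDs this run happened to generate:
  ./scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
  ./scripts/rpc.py bdev_lvol_create -u 56d35c58-0494-42af-8080-c5d15152d329 lbd_0 20480
  ./scripts/rpc.py bdev_lvol_create_lvstore 006ea823-6f2a-41fc-9249-fd0190c78ee9 lvs_n_0
  ./scripts/rpc.py bdev_lvol_create -u bacb5d3f-bbf2-45c9-a77a-39d9c8fa7203 lbd_nest_0 20456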
nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=de106df4-f317-448a-8e24-4646526d1333 00:31:49.289 14:34:12 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:49.549 14:34:12 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:49.549 14:34:12 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 de106df4-f317-448a-8e24-4646526d1333 00:31:49.549 14:34:13 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:49.809 14:34:13 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:49.809 14:34:13 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:49.809 14:34:13 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:49.809 14:34:13 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:49.809 14:34:13 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:49.809 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.067 Initializing NVMe Controllers 00:32:02.067 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:02.067 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:02.067 Initialization complete. Launching workers. 00:32:02.067 ======================================================== 00:32:02.067 Latency(us) 00:32:02.067 Device Information : IOPS MiB/s Average min max 00:32:02.067 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.00 0.02 21306.28 126.04 49486.72 00:32:02.067 ======================================================== 00:32:02.067 Total : 47.00 0.02 21306.28 126.04 49486.72 00:32:02.067 00:32:02.067 14:34:23 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:02.067 14:34:23 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:02.067 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.061 Initializing NVMe Controllers 00:32:12.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:12.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:12.061 Initialization complete. Launching workers. 
00:32:12.061 ======================================================== 00:32:12.061 Latency(us) 00:32:12.061 Device Information : IOPS MiB/s Average min max 00:32:12.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 63.19 7.90 15837.42 6984.21 55869.19 00:32:12.061 ======================================================== 00:32:12.061 Total : 63.19 7.90 15837.42 6984.21 55869.19 00:32:12.061 00:32:12.061 14:34:34 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:12.061 14:34:34 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:12.061 14:34:34 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:12.061 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.053 Initializing NVMe Controllers 00:32:22.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:22.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:22.053 Initialization complete. Launching workers. 00:32:22.053 ======================================================== 00:32:22.053 Latency(us) 00:32:22.053 Device Information : IOPS MiB/s Average min max 00:32:22.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8649.34 4.22 3699.91 292.13 7539.19 00:32:22.053 ======================================================== 00:32:22.054 Total : 8649.34 4.22 3699.91 292.13 7539.19 00:32:22.054 00:32:22.054 14:34:44 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:22.054 14:34:44 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:22.054 EAL: No free 2048 kB hugepages reported on node 1 00:32:32.049 Initializing NVMe Controllers 00:32:32.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:32.049 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:32.049 Initialization complete. Launching workers. 00:32:32.049 ======================================================== 00:32:32.049 Latency(us) 00:32:32.049 Device Information : IOPS MiB/s Average min max 00:32:32.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3551.70 443.96 9015.24 547.46 22101.01 00:32:32.050 ======================================================== 00:32:32.050 Total : 3551.70 443.96 9015.24 547.46 22101.01 00:32:32.050 00:32:32.050 14:34:54 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:32.050 14:34:54 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:32.050 14:34:54 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:32.050 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.086 Initializing NVMe Controllers 00:32:42.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:42.086 Controller IO queue size 128, less than required. 00:32:42.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
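These fabrics runs walk the matrix declared in host/perf.sh: queue depths 1, 32 and 128 against block sizes 512 and 131072, six spdk_nvme_perf invocations in all against the lbd_nest_0-backed namespace, with the results of the two 128-deep runs still to follow below. The loop amounts to:
  for qd in 1 32 128; do
    for o in 512 131072; do
      ./build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done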
00:32:42.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:42.086 Initialization complete. Launching workers. 00:32:42.086 ======================================================== 00:32:42.086 Latency(us) 00:32:42.086 Device Information : IOPS MiB/s Average min max 00:32:42.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15918.19 7.77 8045.93 1946.30 49418.36 00:32:42.086 ======================================================== 00:32:42.086 Total : 15918.19 7.77 8045.93 1946.30 49418.36 00:32:42.086 00:32:42.086 14:35:04 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:42.086 14:35:04 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:42.086 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.089 Initializing NVMe Controllers 00:32:52.089 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:52.089 Controller IO queue size 128, less than required. 00:32:52.089 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:52.089 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:52.089 Initialization complete. Launching workers. 00:32:52.089 ======================================================== 00:32:52.089 Latency(us) 00:32:52.089 Device Information : IOPS MiB/s Average min max 00:32:52.089 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1175.90 146.99 109414.05 15784.57 230519.32 00:32:52.089 ======================================================== 00:32:52.089 Total : 1175.90 146.99 109414.05 15784.57 230519.32 00:32:52.089 00:32:52.089 14:35:15 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:52.089 14:35:15 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete de106df4-f317-448a-8e24-4646526d1333 00:32:53.474 14:35:17 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:53.733 14:35:17 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 006ea823-6f2a-41fc-9249-fd0190c78ee9 00:32:53.993 14:35:17 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:53.993 14:35:17 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:53.993 14:35:17 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:53.993 14:35:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:53.993 14:35:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:32:53.993 14:35:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:53.993 14:35:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:32:53.993 14:35:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:53.993 14:35:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:53.993 rmmod nvme_tcp 00:32:53.993 rmmod nvme_fabrics 00:32:53.993 rmmod nvme_keyring 00:32:53.993 14:35:17 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:54.255 14:35:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:32:54.255 14:35:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:32:54.255 14:35:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 712273 ']' 00:32:54.255 14:35:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 712273 00:32:54.255 14:35:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 712273 ']' 00:32:54.255 14:35:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 712273 00:32:54.255 14:35:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:32:54.255 14:35:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:32:54.255 14:35:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 712273 00:32:54.255 14:35:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:32:54.255 14:35:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:32:54.255 14:35:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 712273' 00:32:54.255 killing process with pid 712273 00:32:54.255 14:35:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 712273 00:32:54.255 14:35:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 712273 00:32:56.166 14:35:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:56.166 14:35:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:56.166 14:35:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:56.166 14:35:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:56.166 14:35:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:56.166 14:35:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:56.166 14:35:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:56.166 14:35:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:58.709 14:35:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:58.709 00:32:58.709 real 1m33.205s 00:32:58.709 user 5m27.000s 00:32:58.709 sys 0m15.108s 00:32:58.709 14:35:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:32:58.709 14:35:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:58.709 ************************************ 00:32:58.709 END TEST nvmf_perf 00:32:58.709 ************************************ 00:32:58.709 14:35:21 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:58.709 14:35:21 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:32:58.709 14:35:21 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:32:58.709 14:35:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:58.709 ************************************ 00:32:58.709 START TEST nvmf_fio_host 00:32:58.709 ************************************ 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:58.709 * Looking for test storage... 
00:32:58.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:58.709 14:35:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:58.710 14:35:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:58.710 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:58.710 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:58.710 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:58.710 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:58.710 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:58.710 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.710 14:35:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:58.710 14:35:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:58.710 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:58.710 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:58.710 14:35:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:58.710 14:35:21 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:06.849 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:06.849 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:06.849 Found net devices under 0000:31:00.0: cvl_0_0 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:06.849 Found net devices under 0000:31:00.1: cvl_0_1 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:06.849 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:06.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:06.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:33:06.849 00:33:06.849 --- 10.0.0.2 ping statistics --- 00:33:06.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.850 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:06.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:06.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:33:06.850 00:33:06.850 --- 10.0.0.1 ping statistics --- 00:33:06.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.850 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=732384 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 732384 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 732384 ']' 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.850 14:35:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:06.850 [2024-06-07 14:35:29.547675] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:33:06.850 [2024-06-07 14:35:29.547723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:06.850 EAL: No free 2048 kB hugepages reported on node 1 00:33:06.850 [2024-06-07 14:35:29.618498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:06.850 [2024-06-07 14:35:29.650998] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:06.850 [2024-06-07 14:35:29.651031] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:06.850 [2024-06-07 14:35:29.651038] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:06.850 [2024-06-07 14:35:29.651049] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:06.850 [2024-06-07 14:35:29.651054] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:06.850 [2024-06-07 14:35:29.651188] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:33:06.850 [2024-06-07 14:35:29.651228] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:33:06.850 [2024-06-07 14:35:29.651327] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.850 [2024-06-07 14:35:29.651328] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:33:06.850 14:35:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:06.850 14:35:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:33:06.850 14:35:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:06.850 [2024-06-07 14:35:30.460544] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.850 14:35:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:06.850 14:35:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:06.850 14:35:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.109 14:35:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:07.109 Malloc1 00:33:07.109 14:35:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:07.367 14:35:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:07.626 14:35:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:07.626 [2024-06-07 14:35:31.173870] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.626 14:35:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:07.885 14:35:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:08.143 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:08.143 fio-3.35 00:33:08.143 Starting 1 thread 00:33:08.143 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.680 00:33:10.680 test: (groupid=0, jobs=1): err= 0: pid=733152: Fri Jun 7 14:35:34 2024 00:33:10.680 read: IOPS=14.2k, BW=55.5MiB/s (58.2MB/s)(111MiB/2004msec) 00:33:10.680 slat (usec): min=2, max=275, avg= 2.18, stdev= 2.15 00:33:10.680 clat (usec): min=3210, max=8649, avg=4947.32, stdev=350.81 00:33:10.680 lat (usec): min=3212, max=8651, avg=4949.50, stdev=350.84 00:33:10.680 clat percentiles (usec): 00:33:10.680 | 1.00th=[ 4146], 5.00th=[ 4424], 10.00th=[ 4555], 20.00th=[ 4686], 00:33:10.680 | 30.00th=[ 4752], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 5014], 00:33:10.680 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5342], 95.00th=[ 5473], 00:33:10.680 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 7111], 99.95th=[ 7701], 00:33:10.680 | 99.99th=[ 8586] 00:33:10.680 bw ( KiB/s): min=55576, max=57400, 
per=100.00%, avg=56838.00, stdev=848.66, samples=4 00:33:10.680 iops : min=13894, max=14350, avg=14209.50, stdev=212.17, samples=4 00:33:10.680 write: IOPS=14.2k, BW=55.6MiB/s (58.3MB/s)(111MiB/2004msec); 0 zone resets 00:33:10.680 slat (usec): min=2, max=191, avg= 2.26, stdev= 1.33 00:33:10.680 clat (usec): min=2564, max=7850, avg=3998.47, stdev=300.09 00:33:10.680 lat (usec): min=2582, max=7852, avg=4000.73, stdev=300.14 00:33:10.680 clat percentiles (usec): 00:33:10.680 | 1.00th=[ 3326], 5.00th=[ 3556], 10.00th=[ 3654], 20.00th=[ 3785], 00:33:10.680 | 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4047], 00:33:10.680 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4424], 00:33:10.680 | 99.00th=[ 4686], 99.50th=[ 4948], 99.90th=[ 6456], 99.95th=[ 7046], 00:33:10.680 | 99.99th=[ 7767] 00:33:10.680 bw ( KiB/s): min=55952, max=57280, per=99.94%, avg=56884.00, stdev=626.44, samples=4 00:33:10.680 iops : min=13988, max=14320, avg=14221.00, stdev=156.61, samples=4 00:33:10.680 lat (msec) : 4=25.69%, 10=74.31% 00:33:10.680 cpu : usr=74.34%, sys=24.01%, ctx=47, majf=0, minf=15 00:33:10.680 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:10.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:10.680 issued rwts: total=28473,28515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.680 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:10.680 00:33:10.680 Run status group 0 (all jobs): 00:33:10.680 READ: bw=55.5MiB/s (58.2MB/s), 55.5MiB/s-55.5MiB/s (58.2MB/s-58.2MB/s), io=111MiB (117MB), run=2004-2004msec 00:33:10.680 WRITE: bw=55.6MiB/s (58.3MB/s), 55.6MiB/s-55.6MiB/s (58.3MB/s-58.3MB/s), io=111MiB (117MB), run=2004-2004msec 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 
00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:10.680 14:35:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:10.943 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:10.943 fio-3.35 00:33:10.943 Starting 1 thread 00:33:10.943 EAL: No free 2048 kB hugepages reported on node 1 00:33:13.483 00:33:13.483 test: (groupid=0, jobs=1): err= 0: pid=733741: Fri Jun 7 14:35:36 2024 00:33:13.483 read: IOPS=9326, BW=146MiB/s (153MB/s)(292MiB/2006msec) 00:33:13.483 slat (usec): min=3, max=107, avg= 3.66, stdev= 1.56 00:33:13.483 clat (usec): min=1732, max=15383, avg=8291.81, stdev=1828.13 00:33:13.483 lat (usec): min=1735, max=15386, avg=8295.47, stdev=1828.24 00:33:13.483 clat percentiles (usec): 00:33:13.483 | 1.00th=[ 4490], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6652], 00:33:13.483 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8291], 60.00th=[ 8848], 00:33:13.483 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10683], 95.00th=[11076], 00:33:13.483 | 99.00th=[12387], 99.50th=[13042], 99.90th=[14222], 99.95th=[14484], 00:33:13.483 | 99.99th=[15270] 00:33:13.483 bw ( KiB/s): min=63264, max=82304, per=49.42%, avg=73752.00, stdev=7868.68, samples=4 00:33:13.483 iops : min= 3954, max= 5144, avg=4609.50, stdev=491.79, samples=4 00:33:13.483 write: IOPS=5486, BW=85.7MiB/s (89.9MB/s)(151MiB/1762msec); 0 zone resets 00:33:13.483 slat (usec): min=40, max=322, avg=41.09, stdev= 6.94 00:33:13.483 clat (usec): min=1852, max=15289, avg=9506.13, stdev=1524.49 00:33:13.483 lat (usec): min=1893, max=15330, avg=9547.22, stdev=1525.71 00:33:13.483 clat percentiles (usec): 00:33:13.483 | 1.00th=[ 6194], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 8291], 00:33:13.483 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:33:13.483 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11338], 95.00th=[12125], 00:33:13.483 | 99.00th=[14091], 99.50th=[14353], 99.90th=[14877], 99.95th=[15008], 00:33:13.483 | 99.99th=[15270] 00:33:13.483 bw ( KiB/s): min=65984, max=86016, per=87.45%, avg=76768.00, stdev=8226.64, samples=4 00:33:13.483 iops : min= 4124, max= 5376, avg=4798.00, stdev=514.16, samples=4 00:33:13.483 lat (msec) : 2=0.03%, 4=0.44%, 10=75.25%, 20=24.28% 00:33:13.483 cpu : usr=83.94%, sys=14.31%, ctx=15, majf=0, minf=40 00:33:13.483 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 
16=0.4%, 32=0.8%, >=64=98.4% 00:33:13.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:13.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:13.483 issued rwts: total=18709,9667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:13.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:13.483 00:33:13.483 Run status group 0 (all jobs): 00:33:13.483 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=292MiB (307MB), run=2006-2006msec 00:33:13.483 WRITE: bw=85.7MiB/s (89.9MB/s), 85.7MiB/s-85.7MiB/s (89.9MB/s-89.9MB/s), io=151MiB (158MB), run=1762-1762msec 00:33:13.483 14:35:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:13.483 14:35:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:13.483 14:35:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:13.483 14:35:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:13.483 14:35:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1512 -- # bdfs=() 00:33:13.483 14:35:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1512 -- # local bdfs 00:33:13.483 14:35:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:13.483 14:35:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:13.483 14:35:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:33:13.483 14:35:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:33:13.483 14:35:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:33:13.483 14:35:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:33:14.053 Nvme0n1 00:33:14.053 14:35:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:14.624 14:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=c0258acc-4339-4455-8f0d-d64f77290104 00:33:14.624 14:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb c0258acc-4339-4455-8f0d-d64f77290104 00:33:14.624 14:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_uuid=c0258acc-4339-4455-8f0d-d64f77290104 00:33:14.624 14:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_info 00:33:14.624 14:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local fc 00:33:14.624 14:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local cs 00:33:14.624 14:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:14.884 14:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:33:14.884 { 00:33:14.884 "uuid": "c0258acc-4339-4455-8f0d-d64f77290104", 00:33:14.884 "name": "lvs_0", 00:33:14.884 "base_bdev": "Nvme0n1", 00:33:14.884 "total_data_clusters": 1787, 00:33:14.884 "free_clusters": 1787, 00:33:14.884 "block_size": 512, 00:33:14.884 "cluster_size": 1073741824 00:33:14.884 } 
00:33:14.884 ]' 00:33:14.884 14:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="c0258acc-4339-4455-8f0d-d64f77290104") .free_clusters' 00:33:14.884 14:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # fc=1787 00:33:14.884 14:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="c0258acc-4339-4455-8f0d-d64f77290104") .cluster_size' 00:33:14.884 14:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # cs=1073741824 00:33:14.884 14:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1372 -- # free_mb=1829888 00:33:14.884 14:35:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # echo 1829888 00:33:14.884 1829888 00:33:14.884 14:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:33:15.144 b1c45b5b-19a5-4238-bf7b-a573792f0516 00:33:15.144 14:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:15.144 14:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:15.404 14:35:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:15.665 14:35:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:15.665 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:15.665 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:33:15.665 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:15.665 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:33:15.665 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:15.665 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:33:15.665 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:33:15.666 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:15.666 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:15.666 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:33:15.666 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:15.666 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:15.666 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:15.666 14:35:39 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:15.666 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:15.666 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:33:15.666 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:15.666 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:15.666 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:15.666 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:15.666 14:35:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:15.926 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:15.926 fio-3.35 00:33:15.926 Starting 1 thread 00:33:15.926 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.471 00:33:18.471 test: (groupid=0, jobs=1): err= 0: pid=734939: Fri Jun 7 14:35:41 2024 00:33:18.471 read: IOPS=10.5k, BW=41.1MiB/s (43.1MB/s)(82.5MiB/2005msec) 00:33:18.471 slat (usec): min=2, max=117, avg= 2.21, stdev= 1.19 00:33:18.471 clat (usec): min=2347, max=11412, avg=6712.44, stdev=497.33 00:33:18.471 lat (usec): min=2363, max=11414, avg=6714.65, stdev=497.27 00:33:18.471 clat percentiles (usec): 00:33:18.471 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6325], 00:33:18.471 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 00:33:18.471 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7504], 00:33:18.471 | 99.00th=[ 7767], 99.50th=[ 7898], 99.90th=[ 9241], 99.95th=[10945], 00:33:18.471 | 99.99th=[11207] 00:33:18.471 bw ( KiB/s): min=40952, max=42672, per=99.88%, avg=42078.00, stdev=803.63, samples=4 00:33:18.471 iops : min=10238, max=10668, avg=10519.50, stdev=200.91, samples=4 00:33:18.471 write: IOPS=10.5k, BW=41.1MiB/s (43.1MB/s)(82.4MiB/2005msec); 0 zone resets 00:33:18.471 slat (nsec): min=2136, max=97441, avg=2300.24, stdev=713.65 00:33:18.471 clat (usec): min=1236, max=9317, avg=5370.74, stdev=422.66 00:33:18.471 lat (usec): min=1244, max=9320, avg=5373.04, stdev=422.64 00:33:18.471 clat percentiles (usec): 00:33:18.471 | 1.00th=[ 4424], 5.00th=[ 4686], 10.00th=[ 4883], 20.00th=[ 5014], 00:33:18.471 | 30.00th=[ 5145], 40.00th=[ 5276], 50.00th=[ 5407], 60.00th=[ 5473], 00:33:18.471 | 70.00th=[ 5604], 80.00th=[ 5669], 90.00th=[ 5866], 95.00th=[ 5997], 00:33:18.471 | 99.00th=[ 6325], 99.50th=[ 6390], 99.90th=[ 7439], 99.95th=[ 8586], 00:33:18.471 | 99.99th=[ 9241] 00:33:18.471 bw ( KiB/s): min=41536, max=42496, per=100.00%, avg=42112.00, stdev=408.13, samples=4 00:33:18.471 iops : min=10384, max=10624, avg=10528.00, stdev=102.03, samples=4 00:33:18.471 lat (msec) : 2=0.02%, 4=0.09%, 10=99.85%, 20=0.04% 00:33:18.471 cpu : usr=72.85%, sys=25.90%, ctx=58, majf=0, minf=20 00:33:18.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:18.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:18.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:33:18.471 issued rwts: total=21116,21106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:18.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:18.471 00:33:18.471 Run status group 0 (all jobs): 00:33:18.471 READ: bw=41.1MiB/s (43.1MB/s), 41.1MiB/s-41.1MiB/s (43.1MB/s-43.1MB/s), io=82.5MiB (86.5MB), run=2005-2005msec 00:33:18.471 WRITE: bw=41.1MiB/s (43.1MB/s), 41.1MiB/s-41.1MiB/s (43.1MB/s-43.1MB/s), io=82.4MiB (86.5MB), run=2005-2005msec 00:33:18.471 14:35:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:18.471 14:35:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:19.414 14:35:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=d0390a56-2897-4218-b31b-fe9210375548 00:33:19.414 14:35:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb d0390a56-2897-4218-b31b-fe9210375548 00:33:19.414 14:35:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_uuid=d0390a56-2897-4218-b31b-fe9210375548 00:33:19.414 14:35:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_info 00:33:19.414 14:35:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local fc 00:33:19.414 14:35:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local cs 00:33:19.414 14:35:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:19.414 14:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:33:19.414 { 00:33:19.414 "uuid": "c0258acc-4339-4455-8f0d-d64f77290104", 00:33:19.414 "name": "lvs_0", 00:33:19.414 "base_bdev": "Nvme0n1", 00:33:19.414 "total_data_clusters": 1787, 00:33:19.414 "free_clusters": 0, 00:33:19.414 "block_size": 512, 00:33:19.414 "cluster_size": 1073741824 00:33:19.414 }, 00:33:19.414 { 00:33:19.414 "uuid": "d0390a56-2897-4218-b31b-fe9210375548", 00:33:19.414 "name": "lvs_n_0", 00:33:19.414 "base_bdev": "b1c45b5b-19a5-4238-bf7b-a573792f0516", 00:33:19.415 "total_data_clusters": 457025, 00:33:19.415 "free_clusters": 457025, 00:33:19.415 "block_size": 512, 00:33:19.415 "cluster_size": 4194304 00:33:19.415 } 00:33:19.415 ]' 00:33:19.415 14:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="d0390a56-2897-4218-b31b-fe9210375548") .free_clusters' 00:33:19.674 14:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # fc=457025 00:33:19.674 14:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d0390a56-2897-4218-b31b-fe9210375548") .cluster_size' 00:33:19.674 14:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # cs=4194304 00:33:19.674 14:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1372 -- # free_mb=1828100 00:33:19.674 14:35:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # echo 1828100 00:33:19.674 1828100 00:33:19.674 14:35:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:33:20.613 a2c7838e-c02e-4cba-9448-4163a1d01a64 00:33:20.613 14:35:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:20.873 14:35:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:20.873 14:35:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:21.134 14:35:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:21.702 test: (g=0): rw=randrw, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:21.702 fio-3.35 00:33:21.702 Starting 1 thread 00:33:21.702 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.247 00:33:24.247 test: (groupid=0, jobs=1): err= 0: pid=736122: Fri Jun 7 14:35:47 2024 00:33:24.247 read: IOPS=9343, BW=36.5MiB/s (38.3MB/s)(73.2MiB/2006msec) 00:33:24.247 slat (usec): min=2, max=108, avg= 2.24, stdev= 1.13 00:33:24.247 clat (usec): min=2763, max=12578, avg=7574.35, stdev=578.48 00:33:24.247 lat (usec): min=2779, max=12580, avg=7576.59, stdev=578.42 00:33:24.247 clat percentiles (usec): 00:33:24.247 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7111], 00:33:24.247 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7701], 00:33:24.247 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8291], 95.00th=[ 8455], 00:33:24.247 | 99.00th=[ 8848], 99.50th=[ 9110], 99.90th=[10683], 99.95th=[11600], 00:33:24.247 | 99.99th=[12518] 00:33:24.247 bw ( KiB/s): min=36184, max=37976, per=99.95%, avg=37356.00, stdev=798.01, samples=4 00:33:24.247 iops : min= 9046, max= 9494, avg=9339.00, stdev=199.50, samples=4 00:33:24.247 write: IOPS=9349, BW=36.5MiB/s (38.3MB/s)(73.3MiB/2006msec); 0 zone resets 00:33:24.247 slat (nsec): min=2145, max=108216, avg=2328.00, stdev=828.30 00:33:24.247 clat (usec): min=1062, max=11607, avg=6037.62, stdev=510.19 00:33:24.247 lat (usec): min=1070, max=11609, avg=6039.95, stdev=510.17 00:33:24.247 clat percentiles (usec): 00:33:24.247 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:33:24.247 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6128], 00:33:24.247 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6652], 95.00th=[ 6783], 00:33:24.247 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[ 9765], 99.95th=[10683], 00:33:24.247 | 99.99th=[11600] 00:33:24.247 bw ( KiB/s): min=37120, max=37744, per=99.97%, avg=37388.00, stdev=285.13, samples=4 00:33:24.247 iops : min= 9280, max= 9436, avg=9347.00, stdev=71.28, samples=4 00:33:24.247 lat (msec) : 2=0.01%, 4=0.10%, 10=99.79%, 20=0.10% 00:33:24.247 cpu : usr=71.22%, sys=27.53%, ctx=27, majf=0, minf=20 00:33:24.247 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:24.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:24.247 issued rwts: total=18744,18755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:24.247 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:24.247 00:33:24.247 Run status group 0 (all jobs): 00:33:24.247 READ: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=73.2MiB (76.8MB), run=2006-2006msec 00:33:24.247 WRITE: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=73.3MiB (76.8MB), run=2006-2006msec 00:33:24.247 14:35:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:24.247 14:35:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:24.247 14:35:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:26.161 14:35:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:26.422 14:35:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:26.992 14:35:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:26.992 14:35:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:29.535 rmmod nvme_tcp 00:33:29.535 rmmod nvme_fabrics 00:33:29.535 rmmod nvme_keyring 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 732384 ']' 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 732384 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 732384 ']' 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 732384 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 732384 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 732384' 00:33:29.535 killing process with pid 732384 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 732384 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 732384 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.535 14:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:29.535 
14:35:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.520 14:35:54 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:31.520 00:33:31.520 real 0m33.132s 00:33:31.520 user 2m45.782s 00:33:31.520 sys 0m9.818s 00:33:31.520 14:35:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:33:31.520 14:35:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.520 ************************************ 00:33:31.520 END TEST nvmf_fio_host 00:33:31.520 ************************************ 00:33:31.520 14:35:54 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:31.520 14:35:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:33:31.520 14:35:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:33:31.520 14:35:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:31.520 ************************************ 00:33:31.520 START TEST nvmf_failover 00:33:31.520 ************************************ 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:31.520 * Looking for test storage... 00:33:31.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:31.520 14:35:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.781 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:31.781 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:31.781 14:35:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:33:31.781 14:35:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.923 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.924 14:36:02 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:39.924 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:39.924 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.924 14:36:02 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:39.924 Found net devices under 0000:31:00.0: cvl_0_0 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:39.924 Found net devices under 0000:31:00.1: cvl_0_1 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:39.924 14:36:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:39.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:39.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:33:39.924 00:33:39.924 --- 10.0.0.2 ping statistics --- 00:33:39.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.924 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:39.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:39.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:33:39.924 00:33:39.924 --- 10.0.0.1 ping statistics --- 00:33:39.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.924 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=742246 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 742246 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 742246 ']' 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:39.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:39.924 14:36:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:39.924 [2024-06-07 14:36:03.166144] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:33:39.924 [2024-06-07 14:36:03.166200] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:39.924 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.925 [2024-06-07 14:36:03.230321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:39.925 [2024-06-07 14:36:03.259819] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:39.925 [2024-06-07 14:36:03.259853] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:39.925 [2024-06-07 14:36:03.259859] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:39.925 [2024-06-07 14:36:03.259863] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:39.925 [2024-06-07 14:36:03.259867] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:39.925 [2024-06-07 14:36:03.259980] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:33:39.925 [2024-06-07 14:36:03.260139] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.925 [2024-06-07 14:36:03.260140] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:33:39.925 14:36:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:39.925 14:36:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:33:39.925 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:39.925 14:36:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:33:39.925 14:36:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:39.925 14:36:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:39.925 14:36:03 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:39.925 [2024-06-07 14:36:03.511350] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:39.925 14:36:03 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:40.186 Malloc0 00:33:40.186 14:36:03 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:40.447 14:36:03 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:40.447 14:36:04 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:33:40.708 [2024-06-07 14:36:04.206229] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.708 14:36:04 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:40.969 [2024-06-07 14:36:04.374773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:40.969 14:36:04 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:40.969 [2024-06-07 14:36:04.543270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:40.969 14:36:04 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:40.969 14:36:04 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=742598 00:33:40.969 14:36:04 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:40.969 14:36:04 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 742598 /var/tmp/bdevperf.sock 00:33:40.969 14:36:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 742598 ']' 00:33:40.969 14:36:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:40.969 14:36:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:33:40.969 14:36:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:40.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
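For orientation, the target-side plumbing that the trace above just performed can be condensed as follows: one E810 port (cvl_0_0) is moved into a private network namespace and the target runs inside it on 10.0.0.2, the initiator keeps cvl_0_1 on 10.0.0.1, and nvmf_tgt then exposes one Malloc-backed subsystem on TCP ports 4420/4421/4422. The commands are copied from the trace; the $spdk/$rpc shorthands, the backgrounding, and the port loop are only a condensation, not the scripts' literal text.

# $spdk = /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk (shortened here)
rpc=$spdk/scripts/rpc.py

# nvmf_tcp_init (nvmf/common.sh): split the two E810 ports across namespaces
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# nvmfappstart + host/failover.sh@22-28: target, transport, subsystem, listeners
ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # real script waits for /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done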
00:33:40.969 14:36:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:40.969 14:36:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:41.912 14:36:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:41.912 14:36:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:33:41.912 14:36:05 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:42.173 NVMe0n1 00:33:42.173 14:36:05 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:42.433 00:33:42.433 14:36:06 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=742941 00:33:42.433 14:36:06 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:42.433 14:36:06 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:43.817 14:36:07 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:43.817 [2024-06-07 14:36:07.175227] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175267] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175273] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175278] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175283] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175287] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175292] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175296] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175300] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175305] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175313] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175318] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175322] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175326] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175331] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175335] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175340] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.817 [2024-06-07 14:36:07.175344] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e34f0 is same with the state(5) to be set 00:33:43.818 14:36:07 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:47.115 14:36:10 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:47.115 00:33:47.115 14:36:10 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:47.115 [2024-06-07 14:36:10.650157] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650191] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650202] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650207] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650213] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650219] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650224] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650228] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650233] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650237] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650242] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650247] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650252] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650256] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650261] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650265] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650274] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650279] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650284] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650288] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650292] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650297] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650301] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650305] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650310] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650314] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650318] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650323] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650328] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650333] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650337] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650341] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650346] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650351] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the 
state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650357] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650361] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650366] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650370] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.115 [2024-06-07 14:36:10.650374] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.116 [2024-06-07 14:36:10.650379] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.116 [2024-06-07 14:36:10.650383] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.116 [2024-06-07 14:36:10.650389] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.116 [2024-06-07 14:36:10.650394] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e4a70 is same with the state(5) to be set 00:33:47.116 14:36:10 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:50.417 14:36:13 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:50.417 [2024-06-07 14:36:13.823499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.417 14:36:13 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:51.358 14:36:14 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:51.358 [2024-06-07 14:36:14.997766] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997799] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997805] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997809] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997814] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997819] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997824] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997828] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 
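The failover choreography that produces the tqpair state errors above (and the aborted commands further down) is easier to read without the trace markers: bdevperf is attached to the same subsystem through two portals under one controller name, a 15-second verify workload is started, and host/failover.sh then removes and re-adds listeners underneath it. A condensed sketch of steps @30 through @59 as they appear in the trace, reusing the $spdk/$rpc shorthands from the previous sketch; the backgrounding and comments are mine, not the script's literal text.

$spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

# Two paths to nqn.2016-06.io.spdk:cnode1 under the single NVMe0 controller
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Start the verify run, then pull paths out from under it
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # drop the active path
sleep 3
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore the first portal
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
wait   # for perform_tests to finish the 15 s run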
00:33:51.358 [2024-06-07 14:36:14.997833] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997837] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997841] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997846] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997851] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997855] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997859] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997864] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997868] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997872] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997877] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997881] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997885] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997890] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997894] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997903] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997908] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997912] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997916] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997921] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997925] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997929] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997934] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.358 [2024-06-07 14:36:14.997938] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e5a10 is same with the state(5) to be set 00:33:51.620 14:36:15 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 742941 00:33:58.260 0 00:33:58.260 14:36:21 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 742598 00:33:58.260 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 742598 ']' 00:33:58.260 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 742598 00:33:58.260 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:33:58.260 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:33:58.260 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 742598 00:33:58.260 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:33:58.260 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:33:58.260 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 742598' 00:33:58.260 killing process with pid 742598 00:33:58.260 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 742598 00:33:58.260 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 742598 00:33:58.260 14:36:21 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:58.260 [2024-06-07 14:36:04.615923] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:33:58.260 [2024-06-07 14:36:04.616009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid742598 ] 00:33:58.260 EAL: No free 2048 kB hugepages reported on node 1 00:33:58.260 [2024-06-07 14:36:04.685635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.260 [2024-06-07 14:36:04.717171] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:33:58.260 Running I/O for 15 seconds... 
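Everything from here down is the bdevperf side of the story, dumped from try.txt: each command/completion pair is one in-flight I/O on the qpair whose listener was just removed, completed with NVMe generic status 00/08, Command Aborted due to SQ Deletion, while the verify run itself still finishes cleanly (the '0' result after the wait above). Purely as an illustration, and assuming the notices sit one per line in the original try.txt rather than wrapped as they are in this log, the aborted completions could be tallied with:

grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt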
00:33:58.260 [2024-06-07 14:36:07.178011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.260 [2024-06-07 14:36:07.178047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.260 [2024-06-07 14:36:07.178064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.260 [2024-06-07 14:36:07.178072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.260 [2024-06-07 14:36:07.178082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.260 [2024-06-07 14:36:07.178089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.260 [2024-06-07 14:36:07.178099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.260 [2024-06-07 14:36:07.178106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.260 [2024-06-07 14:36:07.178115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.260 [2024-06-07 14:36:07.178122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.260 [2024-06-07 14:36:07.178131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.260 [2024-06-07 14:36:07.178138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.260 [2024-06-07 14:36:07.178147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.260 [2024-06-07 14:36:07.178154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.260 [2024-06-07 14:36:07.178163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.260 [2024-06-07 14:36:07.178170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.260 [2024-06-07 14:36:07.178179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178217] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178387] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98008 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 
[2024-06-07 14:36:07.178722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.261 [2024-06-07 14:36:07.178835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.261 [2024-06-07 14:36:07.178844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.178851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.178861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.178869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.178878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.178885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.178895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.178902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.178911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.178917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.178926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.178934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.178943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.178950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.178959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.178965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.178974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.178981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.178990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.178997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 
14:36:07.179382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.262 [2024-06-07 14:36:07.179496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.262 [2024-06-07 14:36:07.179503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.179512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.263 [2024-06-07 14:36:07.179519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.179528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.263 [2024-06-07 14:36:07.179535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.179544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.263 [2024-06-07 14:36:07.179551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.179560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.263 [2024-06-07 14:36:07.179566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.179575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.263 [2024-06-07 14:36:07.179582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.179592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.263 [2024-06-07 14:36:07.179599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.179608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.263 [2024-06-07 14:36:07.179614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.179624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.263 [2024-06-07 14:36:07.179631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.179641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.263 [2024-06-07 14:36:07.179648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.179668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.263 [2024-06-07 14:36:07.179676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97536 len:8 PRP1 0x0 PRP2 0x0 00:33:58.263 [2024-06-07 14:36:07.179684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.179723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.263 [2024-06-07 14:36:07.179733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.179741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.263 [2024-06-07 14:36:07.179748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:33:58.263 [2024-06-07 14:36:07.179756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.263 [2024-06-07 14:36:07.179763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.179771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.263 [2024-06-07 14:36:07.179779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.179786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816790 is same with the state(5) to be set 00:33:58.263 [2024-06-07 14:36:07.180003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.263 [2024-06-07 14:36:07.180010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.263 [2024-06-07 14:36:07.180017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97544 len:8 PRP1 0x0 PRP2 0x0 00:33:58.263 [2024-06-07 14:36:07.180025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.180034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.263 [2024-06-07 14:36:07.180040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.263 [2024-06-07 14:36:07.180045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97552 len:8 PRP1 0x0 PRP2 0x0 00:33:58.263 [2024-06-07 14:36:07.180053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.180060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.263 [2024-06-07 14:36:07.180066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.263 [2024-06-07 14:36:07.180072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97560 len:8 PRP1 0x0 PRP2 0x0 00:33:58.263 [2024-06-07 14:36:07.180079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.180087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.263 [2024-06-07 14:36:07.180092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.263 [2024-06-07 14:36:07.180098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97568 len:8 PRP1 0x0 PRP2 0x0 00:33:58.263 [2024-06-07 14:36:07.180105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.180113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.263 [2024-06-07 14:36:07.180118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.263 [2024-06-07 14:36:07.180124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:97576 len:8 PRP1 0x0 PRP2 0x0 00:33:58.263 [2024-06-07 14:36:07.180132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.180142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.263 [2024-06-07 14:36:07.180147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.263 [2024-06-07 14:36:07.180153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97584 len:8 PRP1 0x0 PRP2 0x0 00:33:58.263 [2024-06-07 14:36:07.180160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.180167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.263 [2024-06-07 14:36:07.180173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.263 [2024-06-07 14:36:07.180179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97592 len:8 PRP1 0x0 PRP2 0x0 00:33:58.263 [2024-06-07 14:36:07.180186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.180198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.263 [2024-06-07 14:36:07.180204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.263 [2024-06-07 14:36:07.180210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97600 len:8 PRP1 0x0 PRP2 0x0 00:33:58.263 [2024-06-07 14:36:07.180217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.180225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.263 [2024-06-07 14:36:07.180230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.263 [2024-06-07 14:36:07.180236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97608 len:8 PRP1 0x0 PRP2 0x0 00:33:58.263 [2024-06-07 14:36:07.180243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.180251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.263 [2024-06-07 14:36:07.180256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.263 [2024-06-07 14:36:07.180263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97616 len:8 PRP1 0x0 PRP2 0x0 00:33:58.263 [2024-06-07 14:36:07.180270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.180277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.263 [2024-06-07 14:36:07.180283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.263 [2024-06-07 14:36:07.180289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97624 len:8 PRP1 0x0 PRP2 0x0 
00:33:58.263 [2024-06-07 14:36:07.180296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.180303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.263 [2024-06-07 14:36:07.180308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.263 [2024-06-07 14:36:07.180314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97632 len:8 PRP1 0x0 PRP2 0x0 00:33:58.263 [2024-06-07 14:36:07.180321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.180328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.263 [2024-06-07 14:36:07.180334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.263 [2024-06-07 14:36:07.180341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97640 len:8 PRP1 0x0 PRP2 0x0 00:33:58.263 [2024-06-07 14:36:07.180350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.180358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.263 [2024-06-07 14:36:07.180363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.263 [2024-06-07 14:36:07.180369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97648 len:8 PRP1 0x0 PRP2 0x0 00:33:58.263 [2024-06-07 14:36:07.180376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.263 [2024-06-07 14:36:07.180384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.263 [2024-06-07 14:36:07.180390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.180396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97656 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.180403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.180411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.180416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.180422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97664 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.180429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.180436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.180442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.180447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97672 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.180455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.180462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.180467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.180473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97680 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.180480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.180487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.180493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.180499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98520 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.180506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.180513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.180518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.180524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97688 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.180531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.180538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.180544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.180551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97696 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.180558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.180565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.180571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.180577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97704 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.180584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.180592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.180597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.180603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97712 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.180610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.180617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.180623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.180628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97720 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.180636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.180643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.180649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.180654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97728 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.180661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.180669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.180674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.180680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97736 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.180687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.180695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.180700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.180706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97744 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.180713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.180720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.180725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.180731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97752 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.180738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.180746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.180753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.180759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97760 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.190557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:58.264 [2024-06-07 14:36:07.190591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.190598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.190607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97768 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.190615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.190623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.190628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.190635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97776 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.190642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.190650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.190655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.190662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97784 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.190669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.190677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.190682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.190688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97792 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.190695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.190703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.190708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.190714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97800 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.190721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.190729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.190735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.190741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97808 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.190748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.190755] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.190760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.190766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97816 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.190773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.190785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.190791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.190797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97824 len:8 PRP1 0x0 PRP2 0x0 00:33:58.264 [2024-06-07 14:36:07.190803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.264 [2024-06-07 14:36:07.190811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.264 [2024-06-07 14:36:07.190817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.264 [2024-06-07 14:36:07.190823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97832 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.190831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.190838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.190844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.190850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97840 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.190858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.190866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.190871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.190878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97848 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.190885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.190893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.190899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.190905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97856 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.190913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.190920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:33:58.265 [2024-06-07 14:36:07.190926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.190932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97864 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.190939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.190947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.190952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.190958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97872 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.190965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.190972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.190977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.190984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97880 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.190992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.191000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.191005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.191012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97888 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.191019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.191027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.191033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.191039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97896 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.191046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.191053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.191058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.191065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97904 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.191072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.191079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.191084] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.191090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97912 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.191097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.191104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.191109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.191116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97920 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.191123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.191130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.191136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.191141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97928 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.191148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.191156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.191161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.191167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97936 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.191174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.191181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.191186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.191204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97944 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.191212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.191220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.191225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.191231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97952 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.191238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.191245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.191250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.191256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97960 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.191263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.191271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.191276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.265 [2024-06-07 14:36:07.191282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97968 len:8 PRP1 0x0 PRP2 0x0 00:33:58.265 [2024-06-07 14:36:07.191289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.265 [2024-06-07 14:36:07.191296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.265 [2024-06-07 14:36:07.191302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97976 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97984 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97992 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98000 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 
[2024-06-07 14:36:07.191413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98008 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98016 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98024 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98032 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98040 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98048 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98056 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98064 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98072 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98080 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98088 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98096 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:98104 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98112 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98120 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98128 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98136 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98144 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98152 len:8 PRP1 0x0 PRP2 0x0 
00:33:58.266 [2024-06-07 14:36:07.191903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.266 [2024-06-07 14:36:07.191911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.266 [2024-06-07 14:36:07.191916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.266 [2024-06-07 14:36:07.191922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98160 len:8 PRP1 0x0 PRP2 0x0 00:33:58.266 [2024-06-07 14:36:07.191929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.191936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.191941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.191947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98168 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.191954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.191961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.191967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.191973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98176 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.191980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.191987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.191993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.191999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98184 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.192005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.192013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.192020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.192026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98192 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.192033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.192040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.192045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.192051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98200 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.192058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.192065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.192070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.192076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98208 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.192083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.192090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.192096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.192102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98216 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.192108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.192116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.192125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.192131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98224 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.192138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.192145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.192151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.192157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98232 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.192163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.192171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.192176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.192182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98240 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.192189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.192200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.192206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.192212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98248 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.192218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.192228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.192233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.192239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98256 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.192246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.192253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.199867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.199895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98264 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.199906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.199917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.199923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.199930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98272 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.199937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.199945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.199950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.199956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98280 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.199963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.199970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.199975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.199981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98288 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.199988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.199995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.200000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.200006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98296 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.200013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:33:58.267 [2024-06-07 14:36:07.200020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.200026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.200031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.200038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.200046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.200051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.200057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98312 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.200068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.200075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.200080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.200086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98320 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.200093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.200101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.200106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.200112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98328 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.200119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.200126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.200131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.200137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98336 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.200144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.200151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.200157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.267 [2024-06-07 14:36:07.200162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98344 len:8 PRP1 0x0 PRP2 0x0 00:33:58.267 [2024-06-07 14:36:07.200169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.267 [2024-06-07 14:36:07.200177] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.267 [2024-06-07 14:36:07.200183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98352 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98360 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98368 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98376 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98384 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98392 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98400 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98408 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98416 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98424 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98432 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98440 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 
14:36:07.200503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98448 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98456 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98464 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98472 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98480 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98488 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200656] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98496 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98504 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98512 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97504 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97512 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.268 [2024-06-07 14:36:07.200780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.268 [2024-06-07 14:36:07.200785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.268 [2024-06-07 14:36:07.200790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97520 len:8 PRP1 0x0 PRP2 0x0 00:33:58.268 [2024-06-07 14:36:07.200797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:07.200805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.269 [2024-06-07 14:36:07.200810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:33:58.269 [2024-06-07 14:36:07.200816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97528 len:8 PRP1 0x0 PRP2 0x0 00:33:58.269 [2024-06-07 14:36:07.200823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:07.200830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.269 [2024-06-07 14:36:07.200836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.269 [2024-06-07 14:36:07.200842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97536 len:8 PRP1 0x0 PRP2 0x0 00:33:58.269 [2024-06-07 14:36:07.200849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:07.200885] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x835f40 was disconnected and freed. reset controller. 00:33:58.269 [2024-06-07 14:36:07.200895] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:58.269 [2024-06-07 14:36:07.200904] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.269 [2024-06-07 14:36:07.200948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x816790 (9): Bad file descriptor 00:33:58.269 [2024-06-07 14:36:07.204487] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.269 [2024-06-07 14:36:07.279577] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
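(Editorial aside, not part of the captured log: the sequence just above — qpair disconnect, trid failover from 10.0.0.2:4420 to 10.0.0.2:4421, controller marked failed, then a successful reset — is the bdev_nvme failover path exercised by this test. A minimal sketch of how such a secondary path is typically registered with SPDK's scripts/rpc.py is shown below; the bdev name Nvme0 is a placeholder, and the exact multipath flag spelling is assumed from current SPDK rpc.py rather than taken from this run.)

    # register the primary path (bdev name Nvme0 is hypothetical)
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # add the secondary path so bdev_nvme can fail over to it (flag assumed)
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover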
00:33:58.269 [2024-06-07 14:36:10.651471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:34224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:34264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:34280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651672] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:34304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:34312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:34320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:34328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:34336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:34344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:34416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.269 [2024-06-07 14:36:10.651943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.269 [2024-06-07 14:36:10.651951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.270 [2024-06-07 14:36:10.651958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.651968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:34448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.270 [2024-06-07 14:36:10.651975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.651983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.270 [2024-06-07 14:36:10.651990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.651999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:100 nsid:1 lba:34464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.270 [2024-06-07 14:36:10.652006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.270 [2024-06-07 14:36:10.652022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.270 [2024-06-07 14:36:10.652039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.270 [2024-06-07 14:36:10.652056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:34496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.270 [2024-06-07 14:36:10.652072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.270 [2024-06-07 14:36:10.652087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.270 [2024-06-07 14:36:10.652105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.270 [2024-06-07 14:36:10.652121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:34728 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 
[2024-06-07 14:36:10.652338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.270 [2024-06-07 14:36:10.652474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.270 [2024-06-07 14:36:10.652482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.652790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.271 [2024-06-07 14:36:10.652806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.271 [2024-06-07 14:36:10.652822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.271 [2024-06-07 14:36:10.652838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.271 [2024-06-07 14:36:10.652854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.271 [2024-06-07 14:36:10.652870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:34568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.271 [2024-06-07 14:36:10.652886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.271 [2024-06-07 14:36:10.652902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.271 [2024-06-07 14:36:10.652919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.271 [2024-06-07 14:36:10.652935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.271 [2024-06-07 14:36:10.652950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.271 [2024-06-07 14:36:10.652966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.652976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.271 [2024-06-07 14:36:10.652983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 
14:36:10.652993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:34624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.271 [2024-06-07 14:36:10.653001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.653010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.271 [2024-06-07 14:36:10.653017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.653026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:34640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.271 [2024-06-07 14:36:10.653034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.653042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.653050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.653059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.653067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.653076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.271 [2024-06-07 14:36:10.653084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.271 [2024-06-07 14:36:10.653093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:44 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.272 [2024-06-07 14:36:10.653461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.272 [2024-06-07 14:36:10.653477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34656 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:58.272 [2024-06-07 14:36:10.653493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.272 [2024-06-07 14:36:10.653509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.272 [2024-06-07 14:36:10.653525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.272 [2024-06-07 14:36:10.653543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.272 [2024-06-07 14:36:10.653559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.272 [2024-06-07 14:36:10.653575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.272 [2024-06-07 14:36:10.653604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.272 [2024-06-07 14:36:10.653610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34704 len:8 PRP1 0x0 PRP2 0x0 00:33:58.272 [2024-06-07 14:36:10.653618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653653] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x838180 was disconnected and freed. reset controller. 
00:33:58.272 [2024-06-07 14:36:10.653662] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:58.272 [2024-06-07 14:36:10.653680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.272 [2024-06-07 14:36:10.653688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.272 [2024-06-07 14:36:10.653704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.272 [2024-06-07 14:36:10.653718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.272 [2024-06-07 14:36:10.653734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:10.653741] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.272 [2024-06-07 14:36:10.657283] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.272 [2024-06-07 14:36:10.657307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x816790 (9): Bad file descriptor 00:33:58.272 [2024-06-07 14:36:10.733743] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
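The records above cover one complete failover cycle: bdev_nvme_failover_trid switches the path from 10.0.0.2:4421 to 10.0.0.2:4422, the admin queue's outstanding ASYNC EVENT REQUESTs are aborted, the controller is disconnected and reset, and _bdev_nvme_reset_ctrlr_complete reports the reset as successful. A minimal sketch for pulling those transitions out of a saved copy of this console output; the file name bdevperf_output.log is only a placeholder, not something this job writes:

grep -o 'Start failover from [0-9.:]* to [0-9.:]*' bdevperf_output.log
grep -c 'Resetting controller successful' bdevperf_output.log

The second count is the same figure that host/failover.sh checks against 3 further down in this log.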
00:33:58.272 [2024-06-07 14:36:14.998568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.272 [2024-06-07 14:36:14.998603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.272 [2024-06-07 14:36:14.998620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.272 [2024-06-07 14:36:14.998628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.273 [2024-06-07 14:36:14.998731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.273 [2024-06-07 14:36:14.998746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998772] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.998990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.998999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999096] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.273 [2024-06-07 14:36:14.999279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.273 [2024-06-07 14:36:14.999286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 
14:36:14.999591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.274 [2024-06-07 14:36:14.999829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.274 [2024-06-07 14:36:14.999835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:14.999844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:14.999851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:14.999860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:14.999869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:14.999878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:14.999885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:14.999894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:14.999901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:14.999910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:14.999917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:14.999926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:14.999933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:14.999942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:14.999949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:14.999958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:14.999965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:14.999974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:14.999981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:14.999990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:14.999997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 
[2024-06-07 14:36:15.000258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.275 [2024-06-07 14:36:15.000477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.275 [2024-06-07 14:36:15.000486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.276 [2024-06-07 14:36:15.000494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.276 [2024-06-07 14:36:15.000510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.276 [2024-06-07 14:36:15.000526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.276 [2024-06-07 14:36:15.000542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.276 [2024-06-07 14:36:15.000559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.276 [2024-06-07 14:36:15.000575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:121 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.276 [2024-06-07 14:36:15.000591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.276 [2024-06-07 14:36:15.000608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.276 [2024-06-07 14:36:15.000624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.276 [2024-06-07 14:36:15.000640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.276 [2024-06-07 14:36:15.000656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.276 [2024-06-07 14:36:15.000672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:58.276 [2024-06-07 14:36:15.000701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:58.276 [2024-06-07 14:36:15.000708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49360 len:8 PRP1 0x0 PRP2 0x0 00:33:58.276 [2024-06-07 14:36:15.000718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000754] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x83a170 was disconnected and freed. reset controller. 
00:33:58.276 [2024-06-07 14:36:15.000764] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:58.276 [2024-06-07 14:36:15.000783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.276 [2024-06-07 14:36:15.000791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.276 [2024-06-07 14:36:15.000806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.276 [2024-06-07 14:36:15.000821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.276 [2024-06-07 14:36:15.000836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:58.276 [2024-06-07 14:36:15.000843] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:58.276 [2024-06-07 14:36:15.004393] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:58.276 [2024-06-07 14:36:15.004417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x816790 (9): Bad file descriptor 00:33:58.276 [2024-06-07 14:36:15.081009] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
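With the controller back up, the harness checks that the whole 15-second run produced exactly the expected number of successful resets. A minimal sketch of the check the @65/@67 trace lines just below perform, assuming $testdir stands for the test/nvmf/host directory whose try.txt holds the captured bdevperf output (directory name taken from the rm -f later in the log):

  count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
  (( count == 3 )) || exit 1   # one successful reset expected per path the test removed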
00:33:58.276 
00:33:58.276 Latency(us)
00:33:58.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:58.276 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:58.276 Verification LBA range: start 0x0 length 0x4000
00:33:58.276 NVMe0n1 : 15.00 11314.64 44.20 523.96 0.00 10784.56 785.07 28398.93
00:33:58.276 ===================================================================================================================
00:33:58.276 Total : 11314.64 44.20 523.96 0.00 10784.56 785.07 28398.93
00:33:58.276 Received shutdown signal, test time was about 15.000000 seconds
00:33:58.276 
00:33:58.276 Latency(us)
00:33:58.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:58.276 ===================================================================================================================
00:33:58.276 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:58.276 14:36:21 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:33:58.276 14:36:21 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:33:58.276 14:36:21 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:33:58.276 14:36:21 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=746112
00:33:58.276 14:36:21 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 746112 /var/tmp/bdevperf.sock
00:33:58.276 14:36:21 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:33:58.276 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 746112 ']'
00:33:58.276 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:58.276 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100
00:33:58.276 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
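Condensed into plain shell, what the relaunch above and the RPC calls traced next amount to is roughly the following; the repository path is shortened to $spdk and the until loop merely stands in for the waitforlisten helper, so treat this as a sketch rather than the script itself:

  # Second bdevperf instance: start it idle (-z) behind its own RPC socket and wait for the socket.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$spdk"/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done

  # Expose the subsystem on two more ports, then attach the same controller over all three
  # paths so bdev_nvme has somewhere to fail over to when a path is removed.
  rpc="$spdk/scripts/rpc.py"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # Dropping the active path forces a failover before perform_tests drives I/O.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1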
00:33:58.276 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:33:58.276 14:36:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:58.537 14:36:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:33:58.537 14:36:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:33:58.537 14:36:22 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:58.798 [2024-06-07 14:36:22.314948] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:58.798 14:36:22 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:59.058 [2024-06-07 14:36:22.471338] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:59.058 14:36:22 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:59.058 NVMe0n1 00:33:59.318 14:36:22 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:59.579 00:33:59.579 14:36:23 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:59.840 00:34:00.102 14:36:23 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:00.102 14:36:23 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:00.102 14:36:23 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:00.362 14:36:23 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:03.664 14:36:26 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:03.664 14:36:26 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:03.664 14:36:27 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=747298 00:34:03.664 14:36:27 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 747298 00:34:03.664 14:36:27 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:04.605 0 00:34:04.605 14:36:28 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:04.605 [2024-06-07 14:36:21.405748] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:34:04.605 [2024-06-07 14:36:21.405806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid746112 ] 00:34:04.605 EAL: No free 2048 kB hugepages reported on node 1 00:34:04.605 [2024-06-07 14:36:21.470022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.605 [2024-06-07 14:36:21.499978] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.605 [2024-06-07 14:36:23.819759] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:04.605 [2024-06-07 14:36:23.819804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.605 [2024-06-07 14:36:23.819815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.605 [2024-06-07 14:36:23.819824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.605 [2024-06-07 14:36:23.819831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.605 [2024-06-07 14:36:23.819839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.605 [2024-06-07 14:36:23.819846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.605 [2024-06-07 14:36:23.819853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.605 [2024-06-07 14:36:23.819860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.605 [2024-06-07 14:36:23.819867] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.606 [2024-06-07 14:36:23.819891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.606 [2024-06-07 14:36:23.819906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab5790 (9): Bad file descriptor 00:34:04.606 [2024-06-07 14:36:23.823366] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:04.606 Running I/O for 1 seconds... 
00:34:04.606 00:34:04.606 Latency(us) 00:34:04.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.606 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:04.606 Verification LBA range: start 0x0 length 0x4000 00:34:04.606 NVMe0n1 : 1.01 11213.31 43.80 0.00 0.00 11356.55 2539.52 14745.60 00:34:04.606 =================================================================================================================== 00:34:04.606 Total : 11213.31 43.80 0.00 0.00 11356.55 2539.52 14745.60 00:34:04.606 14:36:28 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:04.606 14:36:28 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:04.866 14:36:28 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:04.866 14:36:28 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:04.866 14:36:28 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:05.126 14:36:28 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:05.386 14:36:28 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:08.686 14:36:31 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:08.686 14:36:31 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:08.686 14:36:32 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 746112 00:34:08.686 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 746112 ']' 00:34:08.686 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 746112 00:34:08.686 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:34:08.686 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:08.686 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 746112 00:34:08.686 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:34:08.686 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:34:08.686 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 746112' 00:34:08.686 killing process with pid 746112 00:34:08.686 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 746112 00:34:08.686 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 746112 00:34:08.686 14:36:32 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:08.686 14:36:32 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:08.947 14:36:32 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:08.947 rmmod nvme_tcp 00:34:08.947 rmmod nvme_fabrics 00:34:08.947 rmmod nvme_keyring 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 742246 ']' 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 742246 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 742246 ']' 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 742246 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 742246 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 742246' 00:34:08.947 killing process with pid 742246 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 742246 00:34:08.947 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 742246 00:34:09.207 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:09.207 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:09.207 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:09.207 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:09.207 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:09.207 14:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.207 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:09.207 14:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.116 14:36:34 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:11.116 00:34:11.116 real 0m39.647s 00:34:11.116 user 2m0.761s 00:34:11.116 sys 0m8.345s 00:34:11.116 14:36:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:11.116 14:36:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:11.116 
************************************ 00:34:11.116 END TEST nvmf_failover 00:34:11.116 ************************************ 00:34:11.116 14:36:34 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:11.116 14:36:34 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:11.116 14:36:34 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:11.116 14:36:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:11.116 ************************************ 00:34:11.116 START TEST nvmf_host_discovery 00:34:11.116 ************************************ 00:34:11.116 14:36:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:11.376 * Looking for test storage... 00:34:11.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:11.376 14:36:34 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:11.376 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:11.377 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.377 14:36:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:11.377 14:36:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.377 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:11.377 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:11.377 14:36:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:34:11.377 14:36:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.510 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:19.510 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:34:19.510 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:19.510 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:19.510 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:19.510 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:19.510 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:19.510 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:34:19.510 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:19.510 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:34:19.510 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:34:19.510 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:34:19.510 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:19.511 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:19.511 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:19.511 Found net devices under 0000:31:00.0: cvl_0_0 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:19.511 Found net devices under 0000:31:00.1: cvl_0_1 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:19.511 14:36:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:19.511 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:19.511 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:19.511 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:19.511 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:19.511 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:19.511 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:19.511 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:19.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:19.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:34:19.512 00:34:19.512 --- 10.0.0.2 ping statistics --- 00:34:19.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.512 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:19.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:19.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:34:19.512 00:34:19.512 --- 10.0.0.1 ping statistics --- 00:34:19.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.512 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=752786 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 752786 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 752786 ']' 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:19.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:19.512 14:36:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.512 [2024-06-07 14:36:42.280302] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:34:19.512 [2024-06-07 14:36:42.280352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.512 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.512 [2024-06-07 14:36:42.368133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:19.512 [2024-06-07 14:36:42.401922] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:19.512 [2024-06-07 14:36:42.401964] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.512 [2024-06-07 14:36:42.401971] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.512 [2024-06-07 14:36:42.401978] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.512 [2024-06-07 14:36:42.401984] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
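For orientation, the nvmf_tcp_init trace a few lines up reduces to the plumbing below: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with address 10.0.0.2, while the initiator keeps cvl_0_1 with 10.0.0.1, which is why the nvmf_tgt above is launched through ip netns exec. This is a condensed re-statement of the commands already traced, not an exact copy of nvmf/common.sh:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # default namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> initiator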
00:34:19.512 [2024-06-07 14:36:42.402013] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.512 [2024-06-07 14:36:43.079136] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.512 [2024-06-07 14:36:43.087295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.512 null0 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.512 null1 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=752994 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 752994 /tmp/host.sock 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 752994 ']' 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:19.512 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:19.512 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.811 [2024-06-07 14:36:43.163933] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:34:19.811 [2024-06-07 14:36:43.163983] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid752994 ] 00:34:19.811 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.811 [2024-06-07 14:36:43.227077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:19.811 [2024-06-07 14:36:43.258752] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 
-- # xtrace_disable 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.380 14:36:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:20.380 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
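Two SPDK processes are in play here: the nvmf target in the namespace (discovery service on port 8009, subsystem nqn.2016-06.io.spdk:cnode0 being built up in this trace) and a second "host" app that owns /tmp/host.sock. A sketch of that host side, under the assumption that rpc_cmd in the trace resolves to rpc.py against the socket passed with -s:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$spdk"/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  hostpid=$!
  host_rpc() { "$spdk"/scripts/rpc.py -s /tmp/host.sock "$@"; }
  host_rpc log_set_flag bdev_nvme
  # Follow the target's discovery service; anything it advertises gets attached as bdev(s) named nvme0*.
  host_rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # The test then polls these two views until the discovered controller "nvme0" shows up,
  # which happens once cnode0 below gains a namespace, a data listener on 4420 and this host NQN.
  host_rpc bdev_nvme_get_controllers | jq -r '.[].name'
  host_rpc bdev_get_bdevs | jq -r '.[].name'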
00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.641 [2024-06-07 14:36:44.278323] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.641 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:34:20.902 14:36:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:34:21.474 [2024-06-07 14:36:44.987191] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:21.474 [2024-06-07 14:36:44.987215] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:21.474 [2024-06-07 14:36:44.987228] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:21.474 [2024-06-07 14:36:45.116706] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:21.734 [2024-06-07 14:36:45.219452] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:34:21.734 [2024-06-07 14:36:45.219476] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:21.995 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:21.995 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:21.995 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:34:21.995 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:21.995 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:21.996 14:36:45 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:21.996 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:22.257 14:36:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.518 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.518 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:22.518 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:22.518 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:22.518 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:22.518 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:22.518 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@915 -- # (( max-- )) 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.519 [2024-06-07 14:36:46.067055] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:22.519 [2024-06-07 14:36:46.067607] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:22.519 [2024-06-07 14:36:46.067632] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.519 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.779 [2024-06-07 14:36:46.196434] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:22.779 14:36:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:34:23.040 [2024-06-07 14:36:46.458755] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:23.040 [2024-06-07 14:36:46.458776] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:23.040 [2024-06-07 14:36:46.458782] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:23.611 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:23.611 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:23.611 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:34:23.611 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:23.611 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:23.611 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.611 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:23.611 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.611 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:23.611 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.873 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.873 [2024-06-07 14:36:47.334429] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:23.873 [2024-06-07 14:36:47.334450] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:23.874 [2024-06-07 14:36:47.341599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:23.874 [2024-06-07 14:36:47.341616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.874 [2024-06-07 14:36:47.341626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:23.874 [2024-06-07 14:36:47.341633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.874 [2024-06-07 14:36:47.341641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:23.874 [2024-06-07 14:36:47.341648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.874 [2024-06-07 14:36:47.341656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:23.874 [2024-06-07 14:36:47.341663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:23.874 [2024-06-07 14:36:47.341671] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x960df0 is same with the state(5) to be set 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:23.874 [2024-06-07 14:36:47.351613] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x960df0 (9): Bad file descriptor 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.874 [2024-06-07 14:36:47.361651] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:23.874 [2024-06-07 14:36:47.361961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-06-07 14:36:47.361975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x960df0 with addr=10.0.0.2, port=4420 00:34:23.874 [2024-06-07 14:36:47.361983] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x960df0 is same with the state(5) to be set 00:34:23.874 [2024-06-07 14:36:47.361995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x960df0 (9): Bad file descriptor 00:34:23.874 [2024-06-07 14:36:47.362006] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:23.874 [2024-06-07 14:36:47.362013] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:23.874 [2024-06-07 14:36:47.362021] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:23.874 [2024-06-07 14:36:47.362033] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.874 [2024-06-07 14:36:47.371704] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:23.874 [2024-06-07 14:36:47.372034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-06-07 14:36:47.372046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x960df0 with addr=10.0.0.2, port=4420 00:34:23.874 [2024-06-07 14:36:47.372054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x960df0 is same with the state(5) to be set 00:34:23.874 [2024-06-07 14:36:47.372065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x960df0 (9): Bad file descriptor 00:34:23.874 [2024-06-07 14:36:47.372076] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:23.874 [2024-06-07 14:36:47.372082] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:23.874 [2024-06-07 14:36:47.372089] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:23.874 [2024-06-07 14:36:47.372100] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:23.874 [2024-06-07 14:36:47.381757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:23.874 [2024-06-07 14:36:47.382068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-06-07 14:36:47.382080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x960df0 with addr=10.0.0.2, port=4420 00:34:23.874 [2024-06-07 14:36:47.382088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x960df0 is same with the state(5) to be set 00:34:23.874 [2024-06-07 14:36:47.382099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x960df0 (9): Bad file descriptor 00:34:23.874 [2024-06-07 14:36:47.382109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:23.874 [2024-06-07 14:36:47.382116] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:23.874 [2024-06-07 14:36:47.382123] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:23.874 [2024-06-07 14:36:47.382133] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:23.874 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:23.874 [2024-06-07 14:36:47.391811] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:23.874 [2024-06-07 14:36:47.392110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-06-07 14:36:47.392127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x960df0 with addr=10.0.0.2, port=4420 00:34:23.874 [2024-06-07 14:36:47.392134] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x960df0 is same with the state(5) to be set 00:34:23.874 [2024-06-07 14:36:47.392145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x960df0 (9): Bad file descriptor 00:34:23.874 [2024-06-07 14:36:47.392156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:23.874 [2024-06-07 14:36:47.392162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:23.874 [2024-06-07 14:36:47.392169] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:23.874 [2024-06-07 14:36:47.392179] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.874 [2024-06-07 14:36:47.401865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:23.874 [2024-06-07 14:36:47.402120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-06-07 14:36:47.402131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x960df0 with addr=10.0.0.2, port=4420 00:34:23.874 [2024-06-07 14:36:47.402139] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x960df0 is same with the state(5) to be set 00:34:23.874 [2024-06-07 14:36:47.402149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x960df0 (9): Bad file descriptor 00:34:23.874 [2024-06-07 14:36:47.402159] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:23.874 [2024-06-07 14:36:47.402165] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:23.874 [2024-06-07 14:36:47.402173] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:23.875 [2024-06-07 14:36:47.402183] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
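[Annotation, not part of the captured trace] The repeated "connect() failed, errno = 111" and "Resetting controller failed." entries around this point follow the nvmf_subsystem_remove_listener call for port 4420 (host/discovery.sh@127 above): bdev_nvme keeps retrying the dropped 10.0.0.2:4420 path until the discovery poller prunes it ("4420 not found" / "4421 found again" below), while the test polls until only the second listener's port remains. A minimal sketch of that polling pattern, reconstructed from the rpc_cmd/jq pipeline visible in this trace — the loop itself is illustrative, not verbatim test code, and assumes the rpc_cmd helper and /tmp/host.sock from the trace are available:

# Illustrative wait loop (assumed structure; socket path, RPC name and jq filter
# are the ones shown in the trace above).
for _ in $(seq 1 10); do
    # List the remaining TCP ports (trsvcid) of the nvme0 controller via the host RPC socket.
    paths=$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
    # Stop waiting once only the second listener's port (4421) is left.
    if [[ "$paths" == "4421" ]]; then
        break
    fi
    sleep 1
done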
00:34:23.875 [2024-06-07 14:36:47.411915] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:23.875 [2024-06-07 14:36:47.412136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-06-07 14:36:47.412146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x960df0 with addr=10.0.0.2, port=4420 00:34:23.875 [2024-06-07 14:36:47.412154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x960df0 is same with the state(5) to be set 00:34:23.875 [2024-06-07 14:36:47.412165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x960df0 (9): Bad file descriptor 00:34:23.875 [2024-06-07 14:36:47.412175] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:23.875 [2024-06-07 14:36:47.412181] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:23.875 [2024-06-07 14:36:47.412188] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:23.875 [2024-06-07 14:36:47.412203] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:23.875 [2024-06-07 14:36:47.421707] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:23.875 [2024-06-07 14:36:47.421725] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:23.875 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # xargs 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:24.136 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq 
'. | length' 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:24.137 14:36:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.521 [2024-06-07 14:36:48.777145] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:25.521 [2024-06-07 14:36:48.777165] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:25.521 [2024-06-07 14:36:48.777177] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:25.521 [2024-06-07 14:36:48.905607] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:25.521 [2024-06-07 14:36:49.011556] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:25.521 [2024-06-07 14:36:49.011586] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:25.521 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.521 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:25.521 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:34:25.521 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:25.521 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:25.521 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:25.521 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:25.521 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:25.521 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:25.521 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.521 14:36:49 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:34:25.521 request: 00:34:25.521 { 00:34:25.521 "name": "nvme", 00:34:25.521 "trtype": "tcp", 00:34:25.521 "traddr": "10.0.0.2", 00:34:25.521 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:25.521 "adrfam": "ipv4", 00:34:25.521 "trsvcid": "8009", 00:34:25.521 "wait_for_attach": true, 00:34:25.521 "method": "bdev_nvme_start_discovery", 00:34:25.521 "req_id": 1 00:34:25.521 } 00:34:25.521 Got JSON-RPC error response 00:34:25.521 response: 00:34:25.521 { 00:34:25.521 "code": -17, 00:34:25.521 "message": "File exists" 00:34:25.521 } 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- 
# local arg=rpc_cmd 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.522 request: 00:34:25.522 { 00:34:25.522 "name": "nvme_second", 00:34:25.522 "trtype": "tcp", 00:34:25.522 "traddr": "10.0.0.2", 00:34:25.522 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:25.522 "adrfam": "ipv4", 00:34:25.522 "trsvcid": "8009", 00:34:25.522 "wait_for_attach": true, 00:34:25.522 "method": "bdev_nvme_start_discovery", 00:34:25.522 "req_id": 1 00:34:25.522 } 00:34:25.522 Got JSON-RPC error response 00:34:25.522 response: 00:34:25.522 { 00:34:25.522 "code": -17, 00:34:25.522 "message": "File exists" 00:34:25.522 } 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:25.522 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:25.782 14:36:49 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:25.782 14:36:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.723 [2024-06-07 14:36:50.288427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.723 [2024-06-07 14:36:50.288473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95cea0 with addr=10.0.0.2, port=8010 00:34:26.723 [2024-06-07 14:36:50.288489] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:26.723 [2024-06-07 14:36:50.288496] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:26.723 [2024-06-07 14:36:50.288503] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:27.663 [2024-06-07 14:36:51.290678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:27.663 [2024-06-07 14:36:51.290704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95cea0 with addr=10.0.0.2, port=8010 00:34:27.663 [2024-06-07 14:36:51.290716] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:27.663 [2024-06-07 14:36:51.290723] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:27.663 [2024-06-07 14:36:51.290729] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:29.044 [2024-06-07 14:36:52.292669] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:29.044 request: 00:34:29.044 { 00:34:29.044 "name": "nvme_second", 00:34:29.044 "trtype": "tcp", 00:34:29.044 "traddr": "10.0.0.2", 00:34:29.044 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:29.044 "adrfam": "ipv4", 00:34:29.044 "trsvcid": "8010", 00:34:29.044 "attach_timeout_ms": 3000, 00:34:29.044 "method": "bdev_nvme_start_discovery", 00:34:29.044 "req_id": 1 00:34:29.044 } 00:34:29.044 Got JSON-RPC error response 00:34:29.044 response: 00:34:29.044 { 00:34:29.044 "code": -110, 00:34:29.044 "message": "Connection timed out" 
00:34:29.044 } 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 752994 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:29.044 rmmod nvme_tcp 00:34:29.044 rmmod nvme_fabrics 00:34:29.044 rmmod nvme_keyring 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 752786 ']' 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 752786 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 752786 ']' 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 752786 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 752786 00:34:29.044 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:34:29.045 14:36:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:34:29.045 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 752786' 00:34:29.045 killing process with pid 752786 00:34:29.045 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 752786 00:34:29.045 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 752786 00:34:29.045 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:29.045 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:29.045 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:29.045 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:29.045 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:29.045 14:36:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.045 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:29.045 14:36:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:31.583 00:34:31.583 real 0m19.919s 00:34:31.583 user 0m23.302s 00:34:31.583 sys 0m6.801s 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.583 ************************************ 00:34:31.583 END TEST nvmf_host_discovery 00:34:31.583 ************************************ 00:34:31.583 14:36:54 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:31.583 14:36:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:34:31.583 14:36:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:34:31.583 14:36:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:31.583 ************************************ 00:34:31.583 START TEST nvmf_host_multipath_status 00:34:31.583 ************************************ 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:31.583 * Looking for test storage... 
00:34:31.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:31.583 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:31.584 14:36:54 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:34:31.584 14:36:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:39.723 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:39.723 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:34:39.723 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:39.723 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:39.723 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:39.723 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:39.723 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:39.723 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:34:39.723 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:39.723 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:39.724 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:39.724 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:39.724 Found net devices under 0000:31:00.0: cvl_0_0 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:39.724 Found net devices under 0000:31:00.1: cvl_0_1 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:39.724 14:37:02 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:39.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:39.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:34:39.724 00:34:39.724 --- 10.0.0.2 ping statistics --- 00:34:39.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.724 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:39.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:39.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:34:39.724 00:34:39.724 --- 10.0.0.1 ping statistics --- 00:34:39.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.724 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:39.724 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:39.725 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:39.725 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:39.725 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:34:39.725 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:39.725 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=759562 00:34:39.725 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 759562 00:34:39.725 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:39.725 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 759562 ']' 00:34:39.725 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:39.725 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:39.725 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:39.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:39.725 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:39.725 14:37:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:39.725 [2024-06-07 14:37:02.916366] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:34:39.725 [2024-06-07 14:37:02.916418] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:39.725 EAL: No free 2048 kB hugepages reported on node 1 00:34:39.725 [2024-06-07 14:37:02.986042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:39.725 [2024-06-07 14:37:03.017794] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:39.725 [2024-06-07 14:37:03.017831] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:39.725 [2024-06-07 14:37:03.017839] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:39.725 [2024-06-07 14:37:03.017845] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:39.725 [2024-06-07 14:37:03.017851] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:39.725 [2024-06-07 14:37:03.018029] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:34:39.725 [2024-06-07 14:37:03.018030] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:34:40.296 14:37:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:40.296 14:37:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:34:40.296 14:37:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:40.296 14:37:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:34:40.296 14:37:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:40.296 14:37:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:40.296 14:37:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=759562 00:34:40.296 14:37:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:40.296 [2024-06-07 14:37:03.870840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:40.296 14:37:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:40.557 Malloc0 00:34:40.557 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:40.557 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:40.818 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:41.078 [2024-06-07 14:37:04.488889] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.078 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:41.079 [2024-06-07 14:37:04.641285] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:41.079 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=759917 00:34:41.079 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:41.079 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:41.079 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 759917 /var/tmp/bdevperf.sock 00:34:41.079 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 759917 ']' 00:34:41.079 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:41.079 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:34:41.079 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:41.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:41.079 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:34:41.079 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:41.378 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:34:41.378 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:34:41.378 14:37:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:41.637 14:37:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:34:41.897 Nvme0n1 00:34:41.897 14:37:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:42.157 Nvme0n1 00:34:42.157 14:37:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:42.157 14:37:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:44.697 14:37:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:44.697 14:37:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:44.697 14:37:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:44.697 14:37:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:45.636 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:45.636 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:45.636 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.636 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:45.636 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.636 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:45.636 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.636 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:45.896 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:45.896 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:45.896 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:45.896 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:46.157 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.157 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:46.157 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.157 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:46.157 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.157 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:46.157 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.157 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:34:46.417 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.417 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:46.418 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.418 14:37:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:46.678 14:37:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.678 14:37:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:46.678 14:37:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:46.678 14:37:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:46.938 14:37:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:47.879 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:47.879 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:47.879 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.879 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:48.141 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:48.141 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:48.141 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.141 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:48.141 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.141 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:48.141 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.141 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:48.401 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:34:48.401 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:48.402 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.402 14:37:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:48.663 14:37:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.663 14:37:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:48.663 14:37:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.663 14:37:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:48.663 14:37:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.663 14:37:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:48.663 14:37:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:48.663 14:37:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:48.923 14:37:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:48.923 14:37:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:48.923 14:37:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:49.183 14:37:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:49.183 14:37:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:50.568 14:37:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:50.568 14:37:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:50.568 14:37:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.568 14:37:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:50.568 14:37:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.568 14:37:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:34:50.568 14:37:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.568 14:37:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:50.568 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:50.568 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:50.568 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.568 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:50.829 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.830 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:50.830 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.830 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:50.830 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.830 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:50.830 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.830 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:51.090 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.090 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:51.090 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.090 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:51.351 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.351 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:51.351 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:51.351 14:37:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:51.612 14:37:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:52.555 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:52.555 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:52.555 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.555 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:52.816 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.816 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:52.816 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.816 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:53.077 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:53.077 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:53.077 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.077 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:53.077 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.077 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:53.077 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.077 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:53.338 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.338 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:53.338 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.338 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:53.598 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
00:34:53.598 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:53.598 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.598 14:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:53.598 14:37:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:53.598 14:37:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:53.598 14:37:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:53.857 14:37:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:53.857 14:37:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:55.237 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:55.237 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:55.237 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.237 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:55.237 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:55.237 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:55.237 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.237 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:55.237 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:55.237 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:55.237 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.237 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:55.497 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.497 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
00:34:55.497 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.497 14:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:55.756 14:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.756 14:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:55.756 14:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.756 14:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:55.756 14:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:55.756 14:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:55.756 14:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.756 14:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:56.015 14:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:56.016 14:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:56.016 14:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:56.275 14:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:56.275 14:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:57.277 14:37:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:57.277 14:37:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:57.277 14:37:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.277 14:37:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:57.539 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:57.539 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:57.539 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.539 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:57.799 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.799 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:57.799 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.799 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:57.799 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.799 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:57.799 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.799 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:58.058 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.058 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:58.058 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.058 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:58.318 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:58.318 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:58.318 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.318 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:58.318 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.318 14:37:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:58.578 14:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:58.578 14:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:34:58.578 14:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:58.838 14:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:59.776 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:59.776 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:59.776 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.776 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:00.036 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.036 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:00.036 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.036 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:00.296 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.296 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:00.296 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.296 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:00.296 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.296 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:00.296 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.296 14:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:00.556 14:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.556 14:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:00.556 14:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.556 14:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:00.817 14:37:24 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.817 14:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:00.817 14:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.817 14:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:00.817 14:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.817 14:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:00.817 14:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:01.078 14:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:01.338 14:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:02.280 14:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:02.280 14:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:02.280 14:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.280 14:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:02.280 14:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:02.280 14:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:02.280 14:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.280 14:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:02.540 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.540 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:02.540 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.540 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:02.800 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.800 14:37:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:02.800 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.800 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:02.800 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.800 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:02.800 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.800 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:03.061 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.061 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:03.061 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.061 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:03.321 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.321 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:03.322 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:03.322 14:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:03.582 14:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:35:04.522 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:04.522 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:04.522 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.522 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:04.782 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:04.782 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:04.782 14:37:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:04.782 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:05.043 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.043 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:05.043 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.043 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:05.043 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.043 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:05.043 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.043 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:05.303 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.303 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:05.303 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.303 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:05.564 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.564 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:05.564 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.564 14:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:05.564 14:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.564 14:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:05.564 14:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:05.825 14:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:06.086 14:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:07.027 14:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:07.027 14:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:07.027 14:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.027 14:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:07.027 14:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.027 14:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:07.027 14:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.027 14:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:07.286 14:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:07.286 14:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:07.286 14:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.286 14:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:07.547 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.547 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:07.547 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.547 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:07.547 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.547 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:07.547 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.547 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:07.808 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.808 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:07.808 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.808 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:08.072 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:08.072 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 759917 00:35:08.072 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 759917 ']' 00:35:08.072 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 759917 00:35:08.072 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:35:08.072 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:08.072 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 759917 00:35:08.072 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:35:08.072 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:35:08.072 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 759917' 00:35:08.072 killing process with pid 759917 00:35:08.072 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 759917 00:35:08.072 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 759917 00:35:08.072 Connection closed with partial response: 00:35:08.072 00:35:08.072 00:35:08.072 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 759917 00:35:08.072 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:08.072 [2024-06-07 14:37:04.702271] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:35:08.072 [2024-06-07 14:37:04.702326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759917 ] 00:35:08.072 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.072 [2024-06-07 14:37:04.757331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.072 [2024-06-07 14:37:04.785307] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:08.072 Running I/O for 90 seconds... 
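[editor's note] The records that follow are bdevperf's per-I/O trace from try.txt: each READ/WRITE submitted while a path's ANA state is inaccessible completes with the path-related status ASYMMETRIC ACCESS INACCESSIBLE, printed as (03/02), i.e. status code type 0x3 with status code 0x02. A hedged way to tally those completions from a saved copy of the file (the path is the one printed by the test above; adjust to wherever the artifact lands):

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' try.txt   # count of I/Os completed with ANA-inaccessible status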
00:35:08.072 [2024-06-07 14:37:17.297188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.072 [2024-06-07 14:37:17.297225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.297495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.072 [2024-06-07 14:37:17.297501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:08.072 [2024-06-07 14:37:17.298636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:08.073 [2024-06-07 14:37:17.298845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.298993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.298998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.073 [2024-06-07 14:37:17.299319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:08.073 [2024-06-07 14:37:17.299332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.074 [2024-06-07 14:37:17.299336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:08.074 [2024-06-07 14:37:17.299349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.074 [2024-06-07 14:37:17.299354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:08.074 [2024-06-07 14:37:17.299366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.074 [2024-06-07 14:37:17.299371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:08.074 [2024-06-07 14:37:17.299384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.074 [2024-06-07 14:37:17.299389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:08.074 
[2024-06-07 14:37:17.299401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.074 [2024-06-07 14:37:17.299407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:08.074 [2024-06-07 14:37:17.299420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.074 [2024-06-07 14:37:17.299425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:08.074 [2024-06-07 14:37:17.299437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.074 [2024-06-07 14:37:17.299442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:08.074 [2024-06-07 14:37:17.299455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.074 [2024-06-07 14:37:17.299460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:08.074 [2024-06-07 14:37:17.299472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.074 [2024-06-07 14:37:17.299477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:08.074 [2024-06-07 14:37:17.299489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.074 [2024-06-07 14:37:17.299494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:08.074 [2024-06-07 14:37:17.299507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.074 [2024-06-07 14:37:17.299511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:08.074 [2024-06-07 14:37:17.299524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.074 [2024-06-07 14:37:17.299529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:08.074 [2024-06-07 14:37:17.299541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.074 [2024-06-07 14:37:17.299546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:08.074 [2024-06-07 14:37:17.299558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:08.074 [2024-06-07 14:37:17.299563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:35:08.074 [2024-06-07 14:37:17.299575 .. 14:37:17.300914] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion traces on qid:1 (WRITE lba:50680..51088 and READ lba:50080..50128, len:8, SGL DATA BLOCK / TRANSPORT DATA BLOCK), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 while the active path is inaccessible
00:35:08.075 [2024-06-07 14:37:29.467507 .. 14:37:29.468299] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: second burst of the same traces on qid:1 (WRITE lba:118528..118888 and READ lba:117880..118512), all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0
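ASYMMETRIC ACCESS INACCESSIBLE (03/02) is the path-related NVMe status (status code type 0x3, status code 0x02) a controller returns while the ANA group behind a path is in the inaccessible state. The multipath_status test drives its failover by flipping the ANA state of one listener while verify I/O is in flight, so every command queued on that path completes with 03/02 and the initiator's multipath layer retries on the surviving path. A sketch of the kind of RPC used for such a flip is below; the NQN and address come from this run, but the script's exact invocation is not shown in this slice of the log and the flag spelling follows current rpc.py help, so treat it as illustrative.

    # Illustrative only: flip one listener's ANA state so in-flight I/O on that
    # path completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # mark the path inaccessible (listener 10.0.0.2:4421 assumed for the second path)
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n inaccessible

    # later, restore it so the initiator can use the path again
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n optimized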
00:35:08.077 [2024-06-07 14:37:29.468309 .. 14:37:29.468346] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: final READ command/completion traces of the burst (cid:99, cid:111, cid:96; lba:118280, 118312, 118344), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0
00:35:08.077 Received shutdown signal, test time was about 25.658791 seconds
00:35:08.077
00:35:08.077                                                              Latency(us)
00:35:08.077 Device Information                                         : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:08.077 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:35:08.077 Verification LBA range: start 0x0 length 0x4000
00:35:08.077 Nvme0n1                                                    :      25.66   10947.07      42.76       0.00       0.00   11674.77     349.87 3019898.88
00:35:08.077 ===================================================================================================================
00:35:08.077 Total                                                      :              10947.07      42.76       0.00       0.00   11674.77     349.87 3019898.88
00:35:08.077 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:35:08.338 rmmod nvme_tcp
00:35:08.338 rmmod nvme_fabrics
00:35:08.338 rmmod nvme_keyring
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 759562 ']'
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 759562
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 759562 ']'
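The shell trace above begins the standard nvmf teardown for this test: the verification subsystem is deleted over RPC, nvmfcleanup unloads the host-side nvme-tcp/nvme-fabrics kernel modules (the rmmod lines are modprobe's verbose output), and killprocess, continued below, stops the nvmf_tgt reactor with pid 759562. A condensed sketch of that sequence, paraphrased from the trace rather than copied from nvmf/common.sh:

    # Paraphrased teardown sketch (not the literal nvmf/common.sh source).
    nvmfcleanup() {
            sync
            # the modules only unload once all fabric queues are torn down, so retry
            for i in {1..20}; do
                    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && return 0
                    sleep 1
            done
            return 1
    }

    killprocess() {
            local pid=$1                 # 759562 is the target reactor in this run
            kill -0 "$pid" || return 0   # nothing to do if it already exited
            kill "$pid"                  # SIGTERM the reactor
            wait "$pid"                  # reap it (it is a child of the harness)
    }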
00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 759562 00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 759562 00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 759562' 00:35:08.338 killing process with pid 759562 00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 759562 00:35:08.338 14:37:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 759562 00:35:08.599 14:37:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:08.599 14:37:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:08.599 14:37:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:08.599 14:37:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:08.599 14:37:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:08.599 14:37:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.599 14:37:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:08.599 14:37:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.511 14:37:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:10.511 00:35:10.511 real 0m39.382s 00:35:10.511 user 1m39.484s 00:35:10.511 sys 0m11.160s 00:35:10.511 14:37:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:10.511 14:37:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:10.511 ************************************ 00:35:10.511 END TEST nvmf_host_multipath_status 00:35:10.511 ************************************ 00:35:10.772 14:37:34 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:10.772 14:37:34 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:10.772 14:37:34 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:10.772 14:37:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:10.772 ************************************ 00:35:10.772 START TEST nvmf_discovery_remove_ifc 00:35:10.772 ************************************ 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:10.772 * Looking for test storage... 
00:35:10.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:35:10.772 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:10.773 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:10.773 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:10.773 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:10.773 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:10.773 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:10.773 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.773 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:10.773 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.773 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:10.773 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:10.773 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:35:10.773 14:37:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:18.977 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:18.977 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:35:18.977 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:18.977 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:18.977 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:18.977 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:18.977 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:18.977 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:35:18.977 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:18.977 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:35:18.977 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:35:18.977 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:35:18.977 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:18.978 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:18.978 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:18.978 14:37:42 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:18.978 Found net devices under 0000:31:00.0: cvl_0_0 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:18.978 Found net devices under 0000:31:00.1: cvl_0_1 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:18.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:18.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:35:18.978 00:35:18.978 --- 10.0.0.2 ping statistics --- 00:35:18.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:18.978 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:18.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
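nvmf_tcp_init, traced above, builds the two-port test topology on the E810 ports found earlier: cvl_0_0 is moved into the private namespace cvl_0_0_ns_spdk and addressed 10.0.0.2 to act as the target, cvl_0_1 stays in the root namespace as 10.0.0.1 for the initiator, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction (the replies continue below) verifies the link before the target starts. A condensed sketch of the same setup, paraphrased from the trace with this run's interface names and addresses:

    # Paraphrased from the trace above; interface names and addresses are this run's.
    TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"               # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"        # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1             # target -> initiator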
00:35:18.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:35:18.978 00:35:18.978 --- 10.0.0.1 ping statistics --- 00:35:18.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:18.978 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=769865 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 769865 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 769865 ']' 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:18.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:18.978 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:18.979 14:37:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:18.979 [2024-06-07 14:37:42.623236] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:35:18.979 [2024-06-07 14:37:42.623298] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.239 EAL: No free 2048 kB hugepages reported on node 1 00:35:19.239 [2024-06-07 14:37:42.718257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.239 [2024-06-07 14:37:42.764753] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:19.239 [2024-06-07 14:37:42.764809] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:19.239 [2024-06-07 14:37:42.764817] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:19.239 [2024-06-07 14:37:42.764823] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:19.239 [2024-06-07 14:37:42.764829] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:19.240 [2024-06-07 14:37:42.764856] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.810 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:19.810 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:35:19.810 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:19.810 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:19.810 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:19.810 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:19.810 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:19.810 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.810 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:20.070 [2024-06-07 14:37:43.466279] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:20.070 [2024-06-07 14:37:43.474525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:20.070 null0 00:35:20.070 [2024-06-07 14:37:43.506466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.070 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.070 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=769954 00:35:20.070 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 769954 /tmp/host.sock 00:35:20.070 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:20.070 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 769954 ']' 00:35:20.070 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:35:20.070 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:20.070 
14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:20.070 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:20.070 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:20.070 14:37:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:20.070 [2024-06-07 14:37:43.582001] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:35:20.070 [2024-06-07 14:37:43.582066] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769954 ] 00:35:20.070 EAL: No free 2048 kB hugepages reported on node 1 00:35:20.070 [2024-06-07 14:37:43.652049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.070 [2024-06-07 14:37:43.692082] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.011 14:37:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:21.011 14:37:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:35:21.011 14:37:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:21.011 14:37:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:21.011 14:37:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.011 14:37:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:21.011 14:37:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.011 14:37:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:21.011 14:37:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.011 14:37:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:21.011 14:37:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.012 14:37:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:21.012 14:37:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.012 14:37:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:21.953 [2024-06-07 14:37:45.474597] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:21.953 [2024-06-07 14:37:45.474617] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:21.954 [2024-06-07 14:37:45.474629] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:22.213 [2024-06-07 14:37:45.603040] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:22.213 [2024-06-07 14:37:45.788912] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:22.213 [2024-06-07 14:37:45.788964] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:22.214 [2024-06-07 14:37:45.788986] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:22.214 [2024-06-07 14:37:45.788999] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:22.214 [2024-06-07 14:37:45.789019] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:22.214 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.214 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:22.214 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:22.214 [2024-06-07 14:37:45.794151] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x198c8c0 was disconnected and freed. delete nvme_qpair. 00:35:22.214 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:22.214 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:22.214 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.214 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:22.214 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:22.214 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:22.214 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.214 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:22.214 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:22.214 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:22.474 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:22.474 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:22.474 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:22.474 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:22.474 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.474 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:22.474 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:22.474 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:22.474 14:37:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.474 14:37:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ 
nvme0n1 != '' ]] 00:35:22.474 14:37:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:23.417 14:37:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:23.417 14:37:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:23.417 14:37:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:23.417 14:37:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:23.417 14:37:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:23.417 14:37:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:23.417 14:37:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:23.417 14:37:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:23.678 14:37:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:23.678 14:37:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:24.620 14:37:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:24.620 14:37:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:24.620 14:37:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:24.620 14:37:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:24.620 14:37:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.620 14:37:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:24.620 14:37:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:24.620 14:37:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:24.620 14:37:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:24.620 14:37:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:25.563 14:37:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:25.563 14:37:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:25.563 14:37:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:25.563 14:37:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:25.563 14:37:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:25.563 14:37:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:25.563 14:37:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:25.563 14:37:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:25.563 14:37:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:25.563 14:37:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:26.947 14:37:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:35:26.947 14:37:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:26.947 14:37:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:26.947 14:37:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.947 14:37:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:26.947 14:37:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:26.947 14:37:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:26.947 14:37:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.947 14:37:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:26.947 14:37:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:27.890 [2024-06-07 14:37:51.239466] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:27.890 [2024-06-07 14:37:51.239510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:27.890 [2024-06-07 14:37:51.239521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:27.890 [2024-06-07 14:37:51.239531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:27.890 [2024-06-07 14:37:51.239538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:27.890 [2024-06-07 14:37:51.239547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:27.890 [2024-06-07 14:37:51.239554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:27.891 [2024-06-07 14:37:51.239562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:27.891 [2024-06-07 14:37:51.239569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:27.891 [2024-06-07 14:37:51.239577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:27.891 [2024-06-07 14:37:51.239584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:27.891 [2024-06-07 14:37:51.239591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1953990 is same with the state(5) to be set 00:35:27.891 [2024-06-07 14:37:51.249487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1953990 (9): Bad file descriptor 00:35:27.891 [2024-06-07 14:37:51.259527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:27.891 14:37:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:27.891 
14:37:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:27.891 14:37:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:27.891 14:37:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:27.891 14:37:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:27.891 14:37:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.891 14:37:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:28.831 [2024-06-07 14:37:52.283219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:28.831 [2024-06-07 14:37:52.283258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1953990 with addr=10.0.0.2, port=4420 00:35:28.831 [2024-06-07 14:37:52.283269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1953990 is same with the state(5) to be set 00:35:28.831 [2024-06-07 14:37:52.283291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1953990 (9): Bad file descriptor 00:35:28.831 [2024-06-07 14:37:52.283627] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:35:28.831 [2024-06-07 14:37:52.283644] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:35:28.831 [2024-06-07 14:37:52.283651] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:35:28.831 [2024-06-07 14:37:52.283660] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:35:28.831 [2024-06-07 14:37:52.283675] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:28.831 [2024-06-07 14:37:52.283682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:35:28.831 14:37:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:28.831 14:37:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:28.831 14:37:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:29.770 [2024-06-07 14:37:53.286061] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:29.770 [2024-06-07 14:37:53.286093] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:29.770 [2024-06-07 14:37:53.286114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:29.770 [2024-06-07 14:37:53.286124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.770 [2024-06-07 14:37:53.286133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:29.770 [2024-06-07 14:37:53.286140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.770 [2024-06-07 14:37:53.286149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:29.770 [2024-06-07 14:37:53.286156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.770 [2024-06-07 14:37:53.286163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:29.770 [2024-06-07 14:37:53.286170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.770 [2024-06-07 14:37:53.286179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:29.770 [2024-06-07 14:37:53.286186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:29.770 [2024-06-07 14:37:53.286193] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:35:29.770 [2024-06-07 14:37:53.286659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1952e20 (9): Bad file descriptor 00:35:29.770 [2024-06-07 14:37:53.287669] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:29.770 [2024-06-07 14:37:53.287679] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:35:29.770 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:29.770 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:29.770 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:29.770 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:29.770 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:29.770 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:29.770 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:29.770 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:29.770 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:29.770 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:29.770 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:30.030 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:30.030 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:30.030 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:30.030 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:30.030 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:30.030 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:30.030 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:30.030 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:30.030 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:30.030 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:30.030 14:37:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:30.969 14:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:30.969 14:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:30.969 14:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:30.969 14:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:30.969 14:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:35:30.969 14:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:30.969 14:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:30.969 14:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:30.969 14:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:30.969 14:37:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:31.907 [2024-06-07 14:37:55.341313] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:31.907 [2024-06-07 14:37:55.341333] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:31.907 [2024-06-07 14:37:55.341346] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:31.907 [2024-06-07 14:37:55.469792] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:32.167 14:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:32.167 14:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:32.167 14:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:32.167 14:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:32.167 14:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:32.167 14:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:32.167 14:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:32.167 14:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:32.167 14:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:32.168 14:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:32.168 [2024-06-07 14:37:55.652086] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:32.168 [2024-06-07 14:37:55.652127] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:32.168 [2024-06-07 14:37:55.652148] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:32.168 [2024-06-07 14:37:55.652163] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:32.168 [2024-06-07 14:37:55.652173] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:32.168 [2024-06-07 14:37:55.658342] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19972c0 was disconnected and freed. delete nvme_qpair. 
00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 769954 00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 769954 ']' 00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 769954 00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:33.106 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 769954 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 769954' 00:35:33.366 killing process with pid 769954 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 769954 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 769954 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:33.366 rmmod nvme_tcp 00:35:33.366 rmmod nvme_fabrics 00:35:33.366 rmmod nvme_keyring 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:35:33.366 
14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 769865 ']' 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 769865 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 769865 ']' 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 769865 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:33.366 14:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 769865 00:35:33.366 14:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:35:33.366 14:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:35:33.366 14:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 769865' 00:35:33.366 killing process with pid 769865 00:35:33.366 14:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 769865 00:35:33.366 14:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 769865 00:35:33.632 14:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:33.632 14:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:33.632 14:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:33.632 14:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:33.632 14:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:33.632 14:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:33.632 14:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:33.632 14:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.584 14:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:35.584 00:35:35.584 real 0m24.979s 00:35:35.584 user 0m29.623s 00:35:35.584 sys 0m7.416s 00:35:35.584 14:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:35.584 14:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:35.584 ************************************ 00:35:35.584 END TEST nvmf_discovery_remove_ifc 00:35:35.584 ************************************ 00:35:35.584 14:37:59 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:35.584 14:37:59 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:35.584 14:37:59 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:35.584 14:37:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:35.845 ************************************ 00:35:35.845 START TEST nvmf_identify_kernel_target 00:35:35.845 ************************************ 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:35.845 * Looking for test storage... 00:35:35.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.845 14:37:59 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:35.845 14:37:59 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:35:35.845 14:37:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:43.979 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:43.979 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:43.979 14:38:07 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:43.979 Found net devices under 0000:31:00.0: cvl_0_0 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:43.979 Found net devices under 0000:31:00.1: cvl_0_1 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:43.979 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:43.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:43.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:35:43.980 00:35:43.980 --- 10.0.0.2 ping statistics --- 00:35:43.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.980 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:43.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:43.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:35:43.980 00:35:43.980 --- 10.0.0.1 ping statistics --- 00:35:43.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.980 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.980 14:38:07 
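The nvmf_tcp_init trace above splits the NIC pair across a network namespace so the two ports can exercise NVMe/TCP against each other on one host: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and reachability is verified in both directions. Condensed into plain shell (interface names taken from this run), the setup is roughly:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # one port goes into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # root-namespace side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # namespaced side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                   # root namespace -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> root namespace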
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:43.980 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:44.240 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:44.240 14:38:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:48.445 Waiting for block devices as requested 00:35:48.445 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:48.445 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:48.445 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:48.445 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:48.445 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:48.445 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:48.445 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:48.445 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:48.705 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:35:48.705 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:48.705 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:48.965 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:48.965 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:48.965 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:48.965 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:49.226 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:49.226 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:49.226 14:38:12 
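configure_kernel_target, whose trace continues below, exports a local NVMe drive through the kernel nvmet stack rather than through SPDK: after the block-device scan picks an unused, non-zoned /dev/nvme0n1, it creates a subsystem, a namespace and a port under /sys/kernel/config/nvmet, links them together, and confirms the listener with nvme discover. Condensed, with the standard nvmet attribute names filled in where the echo targets are not visible in this trace, the sequence is roughly:

  modprobe nvmet
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"      # reported as Model Number below
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"
  nvme discover --hostnqn=<hostnqn> --hostid=<hostid> -a 10.0.0.1 -t tcp -s 4420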
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:49.226 No valid GPT data, bailing 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:49.226 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:49.227 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:35:49.227 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:49.227 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:35:49.227 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:49.227 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:35:49.227 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:35:49.227 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:35:49.227 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:49.227 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:35:49.489 00:35:49.489 Discovery Log Number of Records 2, Generation counter 2 00:35:49.489 =====Discovery Log Entry 0====== 00:35:49.489 trtype: tcp 00:35:49.489 adrfam: ipv4 00:35:49.489 subtype: current discovery subsystem 00:35:49.489 treq: not specified, sq flow control disable supported 00:35:49.489 portid: 1 00:35:49.489 trsvcid: 4420 00:35:49.489 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:49.489 traddr: 10.0.0.1 00:35:49.489 eflags: none 00:35:49.489 sectype: none 00:35:49.489 =====Discovery Log Entry 1====== 
00:35:49.489 trtype: tcp 00:35:49.489 adrfam: ipv4 00:35:49.489 subtype: nvme subsystem 00:35:49.489 treq: not specified, sq flow control disable supported 00:35:49.489 portid: 1 00:35:49.489 trsvcid: 4420 00:35:49.489 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:49.489 traddr: 10.0.0.1 00:35:49.489 eflags: none 00:35:49.489 sectype: none 00:35:49.489 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:49.489 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:49.489 EAL: No free 2048 kB hugepages reported on node 1 00:35:49.489 ===================================================== 00:35:49.489 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:49.489 ===================================================== 00:35:49.489 Controller Capabilities/Features 00:35:49.489 ================================ 00:35:49.489 Vendor ID: 0000 00:35:49.489 Subsystem Vendor ID: 0000 00:35:49.489 Serial Number: 5f729859c5675b7f28e4 00:35:49.489 Model Number: Linux 00:35:49.489 Firmware Version: 6.7.0-68 00:35:49.489 Recommended Arb Burst: 0 00:35:49.489 IEEE OUI Identifier: 00 00 00 00:35:49.489 Multi-path I/O 00:35:49.489 May have multiple subsystem ports: No 00:35:49.489 May have multiple controllers: No 00:35:49.489 Associated with SR-IOV VF: No 00:35:49.489 Max Data Transfer Size: Unlimited 00:35:49.489 Max Number of Namespaces: 0 00:35:49.489 Max Number of I/O Queues: 1024 00:35:49.489 NVMe Specification Version (VS): 1.3 00:35:49.489 NVMe Specification Version (Identify): 1.3 00:35:49.489 Maximum Queue Entries: 1024 00:35:49.489 Contiguous Queues Required: No 00:35:49.489 Arbitration Mechanisms Supported 00:35:49.489 Weighted Round Robin: Not Supported 00:35:49.489 Vendor Specific: Not Supported 00:35:49.489 Reset Timeout: 7500 ms 00:35:49.489 Doorbell Stride: 4 bytes 00:35:49.489 NVM Subsystem Reset: Not Supported 00:35:49.489 Command Sets Supported 00:35:49.489 NVM Command Set: Supported 00:35:49.489 Boot Partition: Not Supported 00:35:49.489 Memory Page Size Minimum: 4096 bytes 00:35:49.489 Memory Page Size Maximum: 4096 bytes 00:35:49.489 Persistent Memory Region: Not Supported 00:35:49.489 Optional Asynchronous Events Supported 00:35:49.489 Namespace Attribute Notices: Not Supported 00:35:49.489 Firmware Activation Notices: Not Supported 00:35:49.489 ANA Change Notices: Not Supported 00:35:49.489 PLE Aggregate Log Change Notices: Not Supported 00:35:49.489 LBA Status Info Alert Notices: Not Supported 00:35:49.489 EGE Aggregate Log Change Notices: Not Supported 00:35:49.489 Normal NVM Subsystem Shutdown event: Not Supported 00:35:49.489 Zone Descriptor Change Notices: Not Supported 00:35:49.489 Discovery Log Change Notices: Supported 00:35:49.489 Controller Attributes 00:35:49.489 128-bit Host Identifier: Not Supported 00:35:49.489 Non-Operational Permissive Mode: Not Supported 00:35:49.489 NVM Sets: Not Supported 00:35:49.489 Read Recovery Levels: Not Supported 00:35:49.489 Endurance Groups: Not Supported 00:35:49.489 Predictable Latency Mode: Not Supported 00:35:49.489 Traffic Based Keep ALive: Not Supported 00:35:49.489 Namespace Granularity: Not Supported 00:35:49.489 SQ Associations: Not Supported 00:35:49.489 UUID List: Not Supported 00:35:49.489 Multi-Domain Subsystem: Not Supported 00:35:49.489 Fixed Capacity Management: Not Supported 00:35:49.489 Variable Capacity Management: Not 
Supported 00:35:49.489 Delete Endurance Group: Not Supported 00:35:49.489 Delete NVM Set: Not Supported 00:35:49.489 Extended LBA Formats Supported: Not Supported 00:35:49.489 Flexible Data Placement Supported: Not Supported 00:35:49.489 00:35:49.489 Controller Memory Buffer Support 00:35:49.489 ================================ 00:35:49.489 Supported: No 00:35:49.489 00:35:49.489 Persistent Memory Region Support 00:35:49.489 ================================ 00:35:49.489 Supported: No 00:35:49.489 00:35:49.489 Admin Command Set Attributes 00:35:49.489 ============================ 00:35:49.489 Security Send/Receive: Not Supported 00:35:49.489 Format NVM: Not Supported 00:35:49.489 Firmware Activate/Download: Not Supported 00:35:49.489 Namespace Management: Not Supported 00:35:49.489 Device Self-Test: Not Supported 00:35:49.489 Directives: Not Supported 00:35:49.489 NVMe-MI: Not Supported 00:35:49.489 Virtualization Management: Not Supported 00:35:49.489 Doorbell Buffer Config: Not Supported 00:35:49.489 Get LBA Status Capability: Not Supported 00:35:49.489 Command & Feature Lockdown Capability: Not Supported 00:35:49.489 Abort Command Limit: 1 00:35:49.489 Async Event Request Limit: 1 00:35:49.489 Number of Firmware Slots: N/A 00:35:49.489 Firmware Slot 1 Read-Only: N/A 00:35:49.489 Firmware Activation Without Reset: N/A 00:35:49.489 Multiple Update Detection Support: N/A 00:35:49.489 Firmware Update Granularity: No Information Provided 00:35:49.489 Per-Namespace SMART Log: No 00:35:49.489 Asymmetric Namespace Access Log Page: Not Supported 00:35:49.489 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:49.489 Command Effects Log Page: Not Supported 00:35:49.489 Get Log Page Extended Data: Supported 00:35:49.489 Telemetry Log Pages: Not Supported 00:35:49.489 Persistent Event Log Pages: Not Supported 00:35:49.489 Supported Log Pages Log Page: May Support 00:35:49.489 Commands Supported & Effects Log Page: Not Supported 00:35:49.489 Feature Identifiers & Effects Log Page:May Support 00:35:49.489 NVMe-MI Commands & Effects Log Page: May Support 00:35:49.489 Data Area 4 for Telemetry Log: Not Supported 00:35:49.489 Error Log Page Entries Supported: 1 00:35:49.489 Keep Alive: Not Supported 00:35:49.489 00:35:49.489 NVM Command Set Attributes 00:35:49.489 ========================== 00:35:49.489 Submission Queue Entry Size 00:35:49.489 Max: 1 00:35:49.489 Min: 1 00:35:49.489 Completion Queue Entry Size 00:35:49.489 Max: 1 00:35:49.489 Min: 1 00:35:49.489 Number of Namespaces: 0 00:35:49.489 Compare Command: Not Supported 00:35:49.489 Write Uncorrectable Command: Not Supported 00:35:49.489 Dataset Management Command: Not Supported 00:35:49.489 Write Zeroes Command: Not Supported 00:35:49.489 Set Features Save Field: Not Supported 00:35:49.489 Reservations: Not Supported 00:35:49.489 Timestamp: Not Supported 00:35:49.489 Copy: Not Supported 00:35:49.489 Volatile Write Cache: Not Present 00:35:49.489 Atomic Write Unit (Normal): 1 00:35:49.489 Atomic Write Unit (PFail): 1 00:35:49.489 Atomic Compare & Write Unit: 1 00:35:49.489 Fused Compare & Write: Not Supported 00:35:49.489 Scatter-Gather List 00:35:49.489 SGL Command Set: Supported 00:35:49.489 SGL Keyed: Not Supported 00:35:49.489 SGL Bit Bucket Descriptor: Not Supported 00:35:49.489 SGL Metadata Pointer: Not Supported 00:35:49.489 Oversized SGL: Not Supported 00:35:49.489 SGL Metadata Address: Not Supported 00:35:49.489 SGL Offset: Supported 00:35:49.489 Transport SGL Data Block: Not Supported 00:35:49.489 Replay Protected Memory Block: 
Not Supported 00:35:49.489 00:35:49.489 Firmware Slot Information 00:35:49.489 ========================= 00:35:49.489 Active slot: 0 00:35:49.489 00:35:49.489 00:35:49.489 Error Log 00:35:49.489 ========= 00:35:49.489 00:35:49.489 Active Namespaces 00:35:49.489 ================= 00:35:49.489 Discovery Log Page 00:35:49.489 ================== 00:35:49.489 Generation Counter: 2 00:35:49.489 Number of Records: 2 00:35:49.489 Record Format: 0 00:35:49.489 00:35:49.489 Discovery Log Entry 0 00:35:49.489 ---------------------- 00:35:49.489 Transport Type: 3 (TCP) 00:35:49.489 Address Family: 1 (IPv4) 00:35:49.489 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:49.489 Entry Flags: 00:35:49.489 Duplicate Returned Information: 0 00:35:49.489 Explicit Persistent Connection Support for Discovery: 0 00:35:49.489 Transport Requirements: 00:35:49.489 Secure Channel: Not Specified 00:35:49.490 Port ID: 1 (0x0001) 00:35:49.490 Controller ID: 65535 (0xffff) 00:35:49.490 Admin Max SQ Size: 32 00:35:49.490 Transport Service Identifier: 4420 00:35:49.490 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:49.490 Transport Address: 10.0.0.1 00:35:49.490 Discovery Log Entry 1 00:35:49.490 ---------------------- 00:35:49.490 Transport Type: 3 (TCP) 00:35:49.490 Address Family: 1 (IPv4) 00:35:49.490 Subsystem Type: 2 (NVM Subsystem) 00:35:49.490 Entry Flags: 00:35:49.490 Duplicate Returned Information: 0 00:35:49.490 Explicit Persistent Connection Support for Discovery: 0 00:35:49.490 Transport Requirements: 00:35:49.490 Secure Channel: Not Specified 00:35:49.490 Port ID: 1 (0x0001) 00:35:49.490 Controller ID: 65535 (0xffff) 00:35:49.490 Admin Max SQ Size: 32 00:35:49.490 Transport Service Identifier: 4420 00:35:49.490 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:49.490 Transport Address: 10.0.0.1 00:35:49.490 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:49.490 EAL: No free 2048 kB hugepages reported on node 1 00:35:49.490 get_feature(0x01) failed 00:35:49.490 get_feature(0x02) failed 00:35:49.490 get_feature(0x04) failed 00:35:49.490 ===================================================== 00:35:49.490 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:49.490 ===================================================== 00:35:49.490 Controller Capabilities/Features 00:35:49.490 ================================ 00:35:49.490 Vendor ID: 0000 00:35:49.490 Subsystem Vendor ID: 0000 00:35:49.490 Serial Number: fd62e040a1d0a0e09f6d 00:35:49.490 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:49.490 Firmware Version: 6.7.0-68 00:35:49.490 Recommended Arb Burst: 6 00:35:49.490 IEEE OUI Identifier: 00 00 00 00:35:49.490 Multi-path I/O 00:35:49.490 May have multiple subsystem ports: Yes 00:35:49.490 May have multiple controllers: Yes 00:35:49.490 Associated with SR-IOV VF: No 00:35:49.490 Max Data Transfer Size: Unlimited 00:35:49.490 Max Number of Namespaces: 1024 00:35:49.490 Max Number of I/O Queues: 128 00:35:49.490 NVMe Specification Version (VS): 1.3 00:35:49.490 NVMe Specification Version (Identify): 1.3 00:35:49.490 Maximum Queue Entries: 1024 00:35:49.490 Contiguous Queues Required: No 00:35:49.490 Arbitration Mechanisms Supported 00:35:49.490 Weighted Round Robin: Not Supported 00:35:49.490 Vendor Specific: Not Supported 
00:35:49.490 Reset Timeout: 7500 ms 00:35:49.490 Doorbell Stride: 4 bytes 00:35:49.490 NVM Subsystem Reset: Not Supported 00:35:49.490 Command Sets Supported 00:35:49.490 NVM Command Set: Supported 00:35:49.490 Boot Partition: Not Supported 00:35:49.490 Memory Page Size Minimum: 4096 bytes 00:35:49.490 Memory Page Size Maximum: 4096 bytes 00:35:49.490 Persistent Memory Region: Not Supported 00:35:49.490 Optional Asynchronous Events Supported 00:35:49.490 Namespace Attribute Notices: Supported 00:35:49.490 Firmware Activation Notices: Not Supported 00:35:49.490 ANA Change Notices: Supported 00:35:49.490 PLE Aggregate Log Change Notices: Not Supported 00:35:49.490 LBA Status Info Alert Notices: Not Supported 00:35:49.490 EGE Aggregate Log Change Notices: Not Supported 00:35:49.490 Normal NVM Subsystem Shutdown event: Not Supported 00:35:49.490 Zone Descriptor Change Notices: Not Supported 00:35:49.490 Discovery Log Change Notices: Not Supported 00:35:49.490 Controller Attributes 00:35:49.490 128-bit Host Identifier: Supported 00:35:49.490 Non-Operational Permissive Mode: Not Supported 00:35:49.490 NVM Sets: Not Supported 00:35:49.490 Read Recovery Levels: Not Supported 00:35:49.490 Endurance Groups: Not Supported 00:35:49.490 Predictable Latency Mode: Not Supported 00:35:49.490 Traffic Based Keep ALive: Supported 00:35:49.490 Namespace Granularity: Not Supported 00:35:49.490 SQ Associations: Not Supported 00:35:49.490 UUID List: Not Supported 00:35:49.490 Multi-Domain Subsystem: Not Supported 00:35:49.490 Fixed Capacity Management: Not Supported 00:35:49.490 Variable Capacity Management: Not Supported 00:35:49.490 Delete Endurance Group: Not Supported 00:35:49.490 Delete NVM Set: Not Supported 00:35:49.490 Extended LBA Formats Supported: Not Supported 00:35:49.490 Flexible Data Placement Supported: Not Supported 00:35:49.490 00:35:49.490 Controller Memory Buffer Support 00:35:49.490 ================================ 00:35:49.490 Supported: No 00:35:49.490 00:35:49.490 Persistent Memory Region Support 00:35:49.490 ================================ 00:35:49.490 Supported: No 00:35:49.490 00:35:49.490 Admin Command Set Attributes 00:35:49.490 ============================ 00:35:49.490 Security Send/Receive: Not Supported 00:35:49.490 Format NVM: Not Supported 00:35:49.490 Firmware Activate/Download: Not Supported 00:35:49.490 Namespace Management: Not Supported 00:35:49.490 Device Self-Test: Not Supported 00:35:49.490 Directives: Not Supported 00:35:49.490 NVMe-MI: Not Supported 00:35:49.490 Virtualization Management: Not Supported 00:35:49.490 Doorbell Buffer Config: Not Supported 00:35:49.490 Get LBA Status Capability: Not Supported 00:35:49.490 Command & Feature Lockdown Capability: Not Supported 00:35:49.490 Abort Command Limit: 4 00:35:49.490 Async Event Request Limit: 4 00:35:49.490 Number of Firmware Slots: N/A 00:35:49.490 Firmware Slot 1 Read-Only: N/A 00:35:49.490 Firmware Activation Without Reset: N/A 00:35:49.490 Multiple Update Detection Support: N/A 00:35:49.490 Firmware Update Granularity: No Information Provided 00:35:49.490 Per-Namespace SMART Log: Yes 00:35:49.490 Asymmetric Namespace Access Log Page: Supported 00:35:49.490 ANA Transition Time : 10 sec 00:35:49.490 00:35:49.490 Asymmetric Namespace Access Capabilities 00:35:49.490 ANA Optimized State : Supported 00:35:49.490 ANA Non-Optimized State : Supported 00:35:49.490 ANA Inaccessible State : Supported 00:35:49.490 ANA Persistent Loss State : Supported 00:35:49.490 ANA Change State : Supported 00:35:49.490 ANAGRPID is not 
changed : No 00:35:49.490 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:49.490 00:35:49.490 ANA Group Identifier Maximum : 128 00:35:49.490 Number of ANA Group Identifiers : 128 00:35:49.490 Max Number of Allowed Namespaces : 1024 00:35:49.490 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:49.490 Command Effects Log Page: Supported 00:35:49.490 Get Log Page Extended Data: Supported 00:35:49.490 Telemetry Log Pages: Not Supported 00:35:49.490 Persistent Event Log Pages: Not Supported 00:35:49.490 Supported Log Pages Log Page: May Support 00:35:49.490 Commands Supported & Effects Log Page: Not Supported 00:35:49.490 Feature Identifiers & Effects Log Page:May Support 00:35:49.490 NVMe-MI Commands & Effects Log Page: May Support 00:35:49.490 Data Area 4 for Telemetry Log: Not Supported 00:35:49.490 Error Log Page Entries Supported: 128 00:35:49.490 Keep Alive: Supported 00:35:49.490 Keep Alive Granularity: 1000 ms 00:35:49.490 00:35:49.490 NVM Command Set Attributes 00:35:49.490 ========================== 00:35:49.490 Submission Queue Entry Size 00:35:49.490 Max: 64 00:35:49.490 Min: 64 00:35:49.490 Completion Queue Entry Size 00:35:49.490 Max: 16 00:35:49.490 Min: 16 00:35:49.490 Number of Namespaces: 1024 00:35:49.490 Compare Command: Not Supported 00:35:49.490 Write Uncorrectable Command: Not Supported 00:35:49.490 Dataset Management Command: Supported 00:35:49.490 Write Zeroes Command: Supported 00:35:49.490 Set Features Save Field: Not Supported 00:35:49.490 Reservations: Not Supported 00:35:49.490 Timestamp: Not Supported 00:35:49.490 Copy: Not Supported 00:35:49.490 Volatile Write Cache: Present 00:35:49.490 Atomic Write Unit (Normal): 1 00:35:49.490 Atomic Write Unit (PFail): 1 00:35:49.490 Atomic Compare & Write Unit: 1 00:35:49.490 Fused Compare & Write: Not Supported 00:35:49.490 Scatter-Gather List 00:35:49.490 SGL Command Set: Supported 00:35:49.490 SGL Keyed: Not Supported 00:35:49.490 SGL Bit Bucket Descriptor: Not Supported 00:35:49.490 SGL Metadata Pointer: Not Supported 00:35:49.490 Oversized SGL: Not Supported 00:35:49.490 SGL Metadata Address: Not Supported 00:35:49.490 SGL Offset: Supported 00:35:49.490 Transport SGL Data Block: Not Supported 00:35:49.490 Replay Protected Memory Block: Not Supported 00:35:49.490 00:35:49.490 Firmware Slot Information 00:35:49.490 ========================= 00:35:49.490 Active slot: 0 00:35:49.490 00:35:49.490 Asymmetric Namespace Access 00:35:49.490 =========================== 00:35:49.490 Change Count : 0 00:35:49.490 Number of ANA Group Descriptors : 1 00:35:49.490 ANA Group Descriptor : 0 00:35:49.490 ANA Group ID : 1 00:35:49.490 Number of NSID Values : 1 00:35:49.490 Change Count : 0 00:35:49.490 ANA State : 1 00:35:49.490 Namespace Identifier : 1 00:35:49.490 00:35:49.490 Commands Supported and Effects 00:35:49.490 ============================== 00:35:49.491 Admin Commands 00:35:49.491 -------------- 00:35:49.491 Get Log Page (02h): Supported 00:35:49.491 Identify (06h): Supported 00:35:49.491 Abort (08h): Supported 00:35:49.491 Set Features (09h): Supported 00:35:49.491 Get Features (0Ah): Supported 00:35:49.491 Asynchronous Event Request (0Ch): Supported 00:35:49.491 Keep Alive (18h): Supported 00:35:49.491 I/O Commands 00:35:49.491 ------------ 00:35:49.491 Flush (00h): Supported 00:35:49.491 Write (01h): Supported LBA-Change 00:35:49.491 Read (02h): Supported 00:35:49.491 Write Zeroes (08h): Supported LBA-Change 00:35:49.491 Dataset Management (09h): Supported 00:35:49.491 00:35:49.491 Error Log 00:35:49.491 ========= 
00:35:49.491 Entry: 0 00:35:49.491 Error Count: 0x3 00:35:49.491 Submission Queue Id: 0x0 00:35:49.491 Command Id: 0x5 00:35:49.491 Phase Bit: 0 00:35:49.491 Status Code: 0x2 00:35:49.491 Status Code Type: 0x0 00:35:49.491 Do Not Retry: 1 00:35:49.491 Error Location: 0x28 00:35:49.491 LBA: 0x0 00:35:49.491 Namespace: 0x0 00:35:49.491 Vendor Log Page: 0x0 00:35:49.491 ----------- 00:35:49.491 Entry: 1 00:35:49.491 Error Count: 0x2 00:35:49.491 Submission Queue Id: 0x0 00:35:49.491 Command Id: 0x5 00:35:49.491 Phase Bit: 0 00:35:49.491 Status Code: 0x2 00:35:49.491 Status Code Type: 0x0 00:35:49.491 Do Not Retry: 1 00:35:49.491 Error Location: 0x28 00:35:49.491 LBA: 0x0 00:35:49.491 Namespace: 0x0 00:35:49.491 Vendor Log Page: 0x0 00:35:49.491 ----------- 00:35:49.491 Entry: 2 00:35:49.491 Error Count: 0x1 00:35:49.491 Submission Queue Id: 0x0 00:35:49.491 Command Id: 0x4 00:35:49.491 Phase Bit: 0 00:35:49.491 Status Code: 0x2 00:35:49.491 Status Code Type: 0x0 00:35:49.491 Do Not Retry: 1 00:35:49.491 Error Location: 0x28 00:35:49.491 LBA: 0x0 00:35:49.491 Namespace: 0x0 00:35:49.491 Vendor Log Page: 0x0 00:35:49.491 00:35:49.491 Number of Queues 00:35:49.491 ================ 00:35:49.491 Number of I/O Submission Queues: 128 00:35:49.491 Number of I/O Completion Queues: 128 00:35:49.491 00:35:49.491 ZNS Specific Controller Data 00:35:49.491 ============================ 00:35:49.491 Zone Append Size Limit: 0 00:35:49.491 00:35:49.491 00:35:49.491 Active Namespaces 00:35:49.491 ================= 00:35:49.491 get_feature(0x05) failed 00:35:49.491 Namespace ID:1 00:35:49.491 Command Set Identifier: NVM (00h) 00:35:49.491 Deallocate: Supported 00:35:49.491 Deallocated/Unwritten Error: Not Supported 00:35:49.491 Deallocated Read Value: Unknown 00:35:49.491 Deallocate in Write Zeroes: Not Supported 00:35:49.491 Deallocated Guard Field: 0xFFFF 00:35:49.491 Flush: Supported 00:35:49.491 Reservation: Not Supported 00:35:49.491 Namespace Sharing Capabilities: Multiple Controllers 00:35:49.491 Size (in LBAs): 3750748848 (1788GiB) 00:35:49.491 Capacity (in LBAs): 3750748848 (1788GiB) 00:35:49.491 Utilization (in LBAs): 3750748848 (1788GiB) 00:35:49.491 UUID: 06d4f47d-7b6d-4ab2-8997-94b66127aa07 00:35:49.491 Thin Provisioning: Not Supported 00:35:49.491 Per-NS Atomic Units: Yes 00:35:49.491 Atomic Write Unit (Normal): 8 00:35:49.491 Atomic Write Unit (PFail): 8 00:35:49.491 Preferred Write Granularity: 8 00:35:49.491 Atomic Compare & Write Unit: 8 00:35:49.491 Atomic Boundary Size (Normal): 0 00:35:49.491 Atomic Boundary Size (PFail): 0 00:35:49.491 Atomic Boundary Offset: 0 00:35:49.491 NGUID/EUI64 Never Reused: No 00:35:49.491 ANA group ID: 1 00:35:49.491 Namespace Write Protected: No 00:35:49.491 Number of LBA Formats: 1 00:35:49.491 Current LBA Format: LBA Format #00 00:35:49.491 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:49.491 00:35:49.491 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:49.491 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:49.491 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:35:49.491 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:49.491 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:35:49.491 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:49.491 14:38:12 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:49.491 rmmod nvme_tcp 00:35:49.491 rmmod nvme_fabrics 00:35:49.491 14:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:49.491 14:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:35:49.491 14:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:35:49.491 14:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:35:49.491 14:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:49.491 14:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:49.491 14:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:49.491 14:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:49.491 14:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:49.491 14:38:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.491 14:38:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:49.491 14:38:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.035 14:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:52.035 14:38:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:52.035 14:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:52.035 14:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:35:52.035 14:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:52.035 14:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:52.035 14:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:52.035 14:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:52.035 14:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:52.035 14:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:52.035 14:38:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:55.337 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:55.337 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:55.337 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:55.337 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:55.337 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:55.337 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:55.337 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:55.337 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:55.337 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:55.337 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:55.337 0000:00:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:35:55.337 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:55.337 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:55.337 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:55.337 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:55.337 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:55.598 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:35:55.598 00:35:55.598 real 0m19.835s 00:35:55.598 user 0m5.311s 00:35:55.598 sys 0m11.612s 00:35:55.598 14:38:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:55.598 14:38:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:55.598 ************************************ 00:35:55.598 END TEST nvmf_identify_kernel_target 00:35:55.598 ************************************ 00:35:55.598 14:38:19 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:55.598 14:38:19 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:55.598 14:38:19 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:55.598 14:38:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:55.598 ************************************ 00:35:55.598 START TEST nvmf_auth_host 00:35:55.598 ************************************ 00:35:55.598 14:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:55.860 * Looking for test storage... 00:35:55.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:35:55.860 14:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:36:04.066 14:38:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:04.066 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:04.066 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # 
[[ tcp == rdma ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:04.066 Found net devices under 0000:31:00.0: cvl_0_0 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:04.066 Found net devices under 0000:31:00.1: cvl_0_1 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:04.066 
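As in the earlier test, gather_supported_nvmf_pci_devs narrows the host NICs down to the E810 pair and resolves each PCI address to its kernel interface name through sysfs. The core of that walk, condensed (the up/down test is abbreviated here to reading operstate), is roughly:

  net_devs=()
  for pci in "${pci_devs[@]}"; do
      for net_dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ -e $net_dev ]] || continue                      # this PCI function has a bound netdev
          [[ $(<"$net_dev/operstate") == up ]] || continue   # keep only usable links
          net_devs+=("${net_dev##*/}")                       # e.g. cvl_0_0, cvl_0_1
      done
  done
  (( ${#net_devs[@]} > 0 )) || { echo 'no usable net devices' >&2; return 1; }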
14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:04.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:04.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:36:04.066 00:36:04.066 --- 10.0.0.2 ping statistics --- 00:36:04.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:04.066 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:36:04.066 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:04.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:04.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:36:04.067 00:36:04.067 --- 10.0.0.1 ping statistics --- 00:36:04.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:04.067 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=785387 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 785387 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 785387 ']' 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
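nvmfappstart then launches the SPDK target inside the namespace created above, with the nvme_auth debug log flag enabled, and blocks until the target's RPC socket answers before the test continues. A minimal sketch of that start-and-wait step (the real waitforlisten in common.sh carries more retry and error handling):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" 2> /dev/null || { echo 'nvmf_tgt exited before listening' >&2; exit 1; }
      sleep 0.5
  done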
00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9e12beecafbc45ae66ab5ba26f5b1279 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5LP 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9e12beecafbc45ae66ab5ba26f5b1279 0 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9e12beecafbc45ae66ab5ba26f5b1279 0 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9e12beecafbc45ae66ab5ba26f5b1279 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5LP 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5LP 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5LP 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:36:04.067 
14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ea1ae65231356537ba5ff2ae3eb04dbfb61257b4cade781f9098f133010f2919 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lOZ 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ea1ae65231356537ba5ff2ae3eb04dbfb61257b4cade781f9098f133010f2919 3 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ea1ae65231356537ba5ff2ae3eb04dbfb61257b4cade781f9098f133010f2919 3 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ea1ae65231356537ba5ff2ae3eb04dbfb61257b4cade781f9098f133010f2919 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:36:04.067 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lOZ 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lOZ 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.lOZ 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b6d35ac8db42af3e309b62023540b747728f345914501c4a 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DQn 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b6d35ac8db42af3e309b62023540b747728f345914501c4a 0 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b6d35ac8db42af3e309b62023540b747728f345914501c4a 0 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b6d35ac8db42af3e309b62023540b747728f345914501c4a 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DQn 00:36:04.327 14:38:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DQn 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.DQn 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f6badf2f84d24a7cbb1d3f2c3f71248ab75d4d5264da1183 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.A8M 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f6badf2f84d24a7cbb1d3f2c3f71248ab75d4d5264da1183 2 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f6badf2f84d24a7cbb1d3f2c3f71248ab75d4d5264da1183 2 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f6badf2f84d24a7cbb1d3f2c3f71248ab75d4d5264da1183 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.A8M 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.A8M 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.A8M 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eb65304c37e12c602d3df5b4fa16aeb1 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.T9b 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eb65304c37e12c602d3df5b4fa16aeb1 1 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eb65304c37e12c602d3df5b4fa16aeb1 1 
00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eb65304c37e12c602d3df5b4fa16aeb1 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.T9b 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.T9b 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.T9b 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=846de2b8ada080fc1356609764da3be0 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.bNe 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 846de2b8ada080fc1356609764da3be0 1 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 846de2b8ada080fc1356609764da3be0 1 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=846de2b8ada080fc1356609764da3be0 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:36:04.327 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:04.587 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.bNe 00:36:04.587 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.bNe 00:36:04.587 14:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.bNe 00:36:04.587 14:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:04.587 14:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=e7cdb5b2b559d4091eb3e51fbeef8b61c6b53e2f1c4c4dbe 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.bm4 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e7cdb5b2b559d4091eb3e51fbeef8b61c6b53e2f1c4c4dbe 2 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e7cdb5b2b559d4091eb3e51fbeef8b61c6b53e2f1c4c4dbe 2 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e7cdb5b2b559d4091eb3e51fbeef8b61c6b53e2f1c4c4dbe 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.bm4 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.bm4 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.bm4 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=afdcb67def03a8ef161fabbf3aa17ffc 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.iZ5 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key afdcb67def03a8ef161fabbf3aa17ffc 0 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 afdcb67def03a8ef161fabbf3aa17ffc 0 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=afdcb67def03a8ef161fabbf3aa17ffc 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.iZ5 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.iZ5 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.iZ5 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e2d1729e37cc99c6c9cb3736f5369a6dfed7a415479024b1f7929ceea9aec042 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.jtc 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e2d1729e37cc99c6c9cb3736f5369a6dfed7a415479024b1f7929ceea9aec042 3 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e2d1729e37cc99c6c9cb3736f5369a6dfed7a415479024b1f7929ceea9aec042 3 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e2d1729e37cc99c6c9cb3736f5369a6dfed7a415479024b1f7929ceea9aec042 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.jtc 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.jtc 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.jtc 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 785387 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 785387 ']' 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:04.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
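The gen_dhchap_key calls above draw len/2 random bytes from /dev/urandom as a hex string, wrap them into a DHHC-1 secret with a digest index (0=null, 1=sha256, 2=sha384, 3=sha512), and store the result in a 0600 temp file. The inline "python -" step is not expanded in the xtrace output; the sketch below is a reconstruction that assumes the secret is the ASCII hex string, base64-encoded with a little-endian CRC32 trailer, which matches the DHHC-1:NN:...: strings visible later in this log. Treat the python helper as an assumption, not the test's exact code.

    # pick digest index (0=null, 1=sha256, 2=sha384, 3=sha512) and key length in hex chars
    digest=0
    len=32

    # gen_dhchap_key: draw len/2 random bytes and print them as a hex string
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

    # format_dhchap_key (reconstruction): wrap the hex string as a DHHC-1 secret;
    # the 4-byte trailer is assumed to be a little-endian CRC32 of the secret text
    secret=$(python3 -c 'import sys, base64, struct, zlib
    k = sys.argv[1].encode()
    print("DHHC-1:%02d:%s:" % (int(sys.argv[2]),
          base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' "$key" "$digest")

    # store it with restrictive permissions, named after the digest as in the trace
    file=$(mktemp -t spdk.key-null.XXX)
    echo "$secret" > "$file"
    chmod 0600 "$file"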
00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:04.587 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5LP 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.lOZ ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lOZ 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.DQn 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.A8M ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.A8M 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.T9b 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.bNe ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bNe 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.bm4 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.iZ5 ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.iZ5 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.jtc 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:36:04.846 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:04.847 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:04.847 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:04.847 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:04.847 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
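The keyring_file_add_key calls above register each generated secret file with the running target under a short name (keyN for the host secret, ckeyN for the optional controller secret; key4 deliberately has no ckey). rpc_cmd in the trace dispatches to SPDK's scripts/rpc.py, so the same registrations issued directly against the default /var/tmp/spdk.sock look like:

    ./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.5LP
    ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lOZ
    ./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.DQn
    ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.A8M
    ./scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.T9b
    ./scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bNe
    ./scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha384.bm4
    ./scripts/rpc.py keyring_file_add_key ckey3 /tmp/spdk.key-null.iZ5
    ./scripts/rpc.py keyring_file_add_key key4  /tmp/spdk.key-sha512.jtc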
00:36:04.847 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:36:04.847 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:04.847 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:04.847 14:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:09.041 Waiting for block devices as requested 00:36:09.041 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:09.041 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:09.041 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:09.041 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:09.041 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:09.041 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:09.041 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:09.041 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:09.300 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:09.300 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:09.559 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:09.559 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:09.559 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:09.559 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:09.818 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:09.818 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:09.818 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:10.387 14:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:10.387 14:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:10.388 14:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:10.388 14:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:36:10.388 14:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:10.388 14:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:36:10.388 14:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:10.388 14:38:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:10.388 14:38:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:10.388 No valid GPT data, bailing 00:36:10.388 14:38:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:10.388 14:38:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:36:10.388 14:38:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:36:10.388 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:10.388 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:10.388 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:10.388 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:10.648 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:10.648 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:36:10.648 14:38:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:36:10.648 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:10.648 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:36:10.648 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:10.648 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:36:10.648 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:36:10.648 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:36:10.649 00:36:10.649 Discovery Log Number of Records 2, Generation counter 2 00:36:10.649 =====Discovery Log Entry 0====== 00:36:10.649 trtype: tcp 00:36:10.649 adrfam: ipv4 00:36:10.649 subtype: current discovery subsystem 00:36:10.649 treq: not specified, sq flow control disable supported 00:36:10.649 portid: 1 00:36:10.649 trsvcid: 4420 00:36:10.649 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:10.649 traddr: 10.0.0.1 00:36:10.649 eflags: none 00:36:10.649 sectype: none 00:36:10.649 =====Discovery Log Entry 1====== 00:36:10.649 trtype: tcp 00:36:10.649 adrfam: ipv4 00:36:10.649 subtype: nvme subsystem 00:36:10.649 treq: not specified, sq flow control disable supported 00:36:10.649 portid: 1 00:36:10.649 trsvcid: 4420 00:36:10.649 subnqn: nqn.2024-02.io.spdk:cnode0 00:36:10.649 traddr: 10.0.0.1 00:36:10.649 eflags: none 00:36:10.649 sectype: none 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 
]] 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.649 nvme0n1 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.649 14:38:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.649 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.910 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.910 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: ]] 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.911 
14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.911 nvme0n1 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.911 14:38:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:10.911 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.172 nvme0n1 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
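Each iteration of the loop pairs a kernel-target step (nvmet_auth_set_key, which writes the expected hash, DH group, host secret, and optional controller secret for nqn.2024-02.io.spdk:host0) with an SPDK initiator connect that presents the matching key pair. The rpc.py calls below are taken directly from the trace; the configfs attribute paths on the target side are an assumption, since xtrace does not show the redirection targets of the echo commands. A sketch of one sha256/ffdhe2048/key1 pass:

    # target side (kernel nvmet): set the expected hash, DH group and secrets for the host
    # NOTE: attribute names are assumed; the trace only shows the echoed values
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==:' > "$host/dhchap_key"
    echo 'DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==:' > "$host/dhchap_ctrl_key"

    # initiator side (SPDK): restrict digests/DH groups, then connect with the registered keys
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # verify the controller came up, then tear it down before the next digest/dhgroup/key combination
    ./scripts/rpc.py bdev_nvme_get_controllers
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0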
00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: ]] 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.172 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.433 nvme0n1 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: ]] 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:11.434 14:38:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.434 14:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.695 nvme0n1 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.695 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.696 nvme0n1 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.696 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: ]] 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.956 nvme0n1 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:11.956 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.217 nvme0n1 00:36:12.217 
14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.217 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: ]] 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.478 14:38:35 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:12.479 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:12.479 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:12.479 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:12.479 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.479 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.479 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:12.479 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.479 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:12.479 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:12.479 14:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:12.479 14:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:12.479 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.479 14:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.479 nvme0n1 00:36:12.479 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.479 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.479 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.479 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.479 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.479 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
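The trace above repeats one pattern per key ID: program the kernel nvmet target with the DHHC-1 secret for that key, then have the SPDK initiator attach with the matching --dhchap-key/--dhchap-ctrlr-key and confirm the controller actually came up before detaching again. A minimal sketch of that per-key round trip, built only from commands visible in the trace (it assumes rpc_cmd is the test framework's wrapper around SPDK's scripts/rpc.py and that key2/ckey2 already name valid DH-HMAC-CHAP secrets on the host side):

    # One connect_authenticate pass, as echoed by the xtrace above (sketch, not the verbatim script).
    digest=sha256
    dhgroup=ffdhe3072
    keyid=2

    # Host side: restrict the initiator to a single digest/DH-group pair.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach, authenticating with key${keyid} and (bidirectionally) ckey${keyid}.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # The controller only exists if DH-HMAC-CHAP authentication succeeded.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Tear down before the next digest/dhgroup/keyid combination.
    rpc_cmd bdev_nvme_detach_controller nvme0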
00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: ]] 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.740 nvme0n1 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.740 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.000 
14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.000 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.000 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.000 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.000 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.000 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.000 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:13.000 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.001 14:38:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.001 nvme0n1 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.001 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.261 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.261 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.261 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: ]] 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:13.262 14:38:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.262 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.522 nvme0n1 00:36:13.522 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.522 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.522 14:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.522 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.522 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.522 14:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:13.522 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:13.523 14:38:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.523 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.785 nvme0n1 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: ]] 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.785 14:38:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.785 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.046 nvme0n1 00:36:14.046 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.046 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.046 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.046 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.046 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.046 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.307 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.307 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.307 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.307 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.307 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.307 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.307 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:14.307 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.307 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.307 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:14.307 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
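Each of these near-identical blocks is one pass of the nested loop that the xtrace keeps echoing as host/auth.sh@101 through @104: the outer loop walks the FFDHE groups, the inner loop walks key IDs 0 through 4, and every pass first sets the target-side key and then calls connect_authenticate as sketched earlier. A hedged reconstruction of that driver loop (array contents are inferred from what actually appears in this stretch of the log; the real key material and any outer digest loop live elsewhere in host/auth.sh):

    # Inferred from the @101..@104 trace lines; the array contents below are assumptions.
    digest=sha256                                        # only digest exercised in this stretch of the trace
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen so far in this log
    keys=(placeholder0 placeholder1 placeholder2 placeholder3 placeholder4)  # stand-ins for the DHHC-1 secrets

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # program hmac, dhgroup, key and ckey on the target
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # host attach, verify nvme0 exists, detach
        done
    done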
00:36:14.307 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:14.307 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:14.307 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: ]] 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.308 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.568 nvme0n1 00:36:14.568 14:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.568 14:38:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.568 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.568 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.568 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.568 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.568 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.568 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.568 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.568 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.568 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.568 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.568 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:14.568 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.568 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.569 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.829 nvme0n1 00:36:14.829 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.829 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.829 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.829 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.829 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.829 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.829 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.829 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.829 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:14.830 14:38:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: ]] 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:14.830 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.401 nvme0n1 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.401 
14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.401 14:38:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:15.401 14:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.973 nvme0n1 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: ]] 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:15.973 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.546 nvme0n1 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.546 
14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: ]] 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:16.546 14:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.546 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:16.546 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.546 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:16.546 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:16.546 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:16.546 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.546 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.546 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:16.546 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.546 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:16.546 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:16.546 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:16.546 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:16.546 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:16.546 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.807 nvme0n1 00:36:16.807 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:16.807 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.807 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:16.807 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.807 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.807 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.068 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.328 nvme0n1 00:36:17.328 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.328 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.328 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.329 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.329 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.329 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.589 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.589 14:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.589 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.589 14:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: ]] 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:17.589 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.590 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.161 nvme0n1 00:36:18.161 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.161 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.161 14:38:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.161 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.161 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.161 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.421 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.421 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.421 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.421 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.421 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.421 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.421 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:18.421 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.421 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.422 14:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.994 nvme0n1 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: ]] 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:18.994 14:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.973 nvme0n1 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.973 
14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: ]] 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.973 14:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:19.974 14:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:19.974 14:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:19.974 14:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.974 14:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.974 14:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
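The get_main_ns_ip block that repeats before every attach in this trace resolves the address the host connects to: it maps the transport name to the name of an environment variable and then dereferences it, which is why the trace shows ip=NVMF_INITIATOR_IP followed by the literal 10.0.0.1. A minimal sketch of that lookup, assuming NVMF_INITIATOR_IP is exported as 10.0.0.1 by the surrounding test setup; the helper name and error handling below are illustrative, not the verbatim nvmf/common.sh:

# Sketch only: mirrors the candidate-map-plus-indirection pattern seen in the trace.
get_main_ns_ip_sketch() {
    local transport=$1 ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $transport || -z ${ip_candidates[$transport]} ]] && return 1
    ip=${ip_candidates[$transport]}   # holds a variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1       # indirect expansion yields the actual address
    echo "${!ip}"
}

NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip_sketch tcp   # prints 10.0.0.1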
00:36:19.974 14:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.974 14:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:19.974 14:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:19.974 14:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:19.974 14:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:19.974 14:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:19.974 14:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.544 nvme0n1 00:36:20.544 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.544 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.544 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.544 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.544 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.544 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:20.804 
14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:20.804 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.376 nvme0n1 00:36:21.376 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.376 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.376 14:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.376 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.376 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.376 14:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.376 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.376 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.376 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.376 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: ]] 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.637 nvme0n1 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
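Every iteration in this trace follows the same connect/verify/detach cycle: the host is restricted to the digest and DH group under test via bdev_nvme_set_options, a controller is attached with the numbered DH-HMAC-CHAP key (plus the controller key when one is defined), bdev_nvme_get_controllers is read back to confirm the authenticated session came up as nvme0, and the controller is detached before the next combination. A condensed sketch of that cycle, assuming rpc_cmd, get_main_ns_ip, nvmet_auth_set_key and the keys/ckeys arrays behave as shown in the trace; this illustrates the pattern rather than reproducing host/auth.sh verbatim:

# Illustrative condensation of the loop being traced (digest/dhgroup lists are examples).
for digest in "${digests[@]}"; do            # e.g. sha256 sha384 ...
  for dhgroup in "${dhgroups[@]}"; do        # e.g. ffdhe2048 ffdhe6144 ffdhe8192
    for keyid in "${!keys[@]}"; do
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"          # program the target side
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) # controller key only if defined
      rpc_cmd bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
done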
00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.637 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.898 nvme0n1 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: ]] 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:21.898 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:21.899 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:21.899 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:21.899 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:21.899 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.160 nvme0n1 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: ]] 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.160 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.422 nvme0n1 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.422 14:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.422 nvme0n1 00:36:22.422 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.422 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.422 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.422 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.422 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.422 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.422 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.422 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.422 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:36:22.422 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: ]] 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:22.683 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
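
For readers following the trace: each key id pairs a host DH-HMAC-CHAP secret (keyN) with an optional controller secret (ckeyN), and auth.sh only adds --dhchap-ctrlr-key when a controller secret exists for that id (key id 4 has none in this run, so that id is exercised without a controller key, i.e. without bidirectional authentication). A minimal sketch of that expansion, using the same array names and idiom that appear at host/auth.sh@58 in the trace; the key id and the literal secret below are illustrative placeholders, not values from this run:

    keyid=3
    # ckeys[3] holds the controller secret for key id 3; ckeys[4] is left empty.
    ckeys[3]='DHHC-1:00:...'
    # Expands to '--dhchap-ctrlr-key ckey3' when ckeys[keyid] is set and
    # non-empty, and to nothing at all otherwise, so the attach command
    # below picks up the extra argument only for bidirectional cases.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
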
00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.684 nvme0n1 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:22.684 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
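
Every connect_authenticate block in this trace repeats the same cycle for one digest/dhgroup/key-id combination: configure the host's DH-HMAC-CHAP digest and DH group, attach the controller over TCP with the key (and controller key, if any), confirm the controller actually shows up, then detach it. The following is a condensed sketch of one iteration rather than a verbatim excerpt; it uses only the RPCs, address, port and NQNs visible in the trace, and rpc_cmd is the test suite's RPC helper:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # The controller is only listed if DH-HMAC-CHAP authentication succeeded.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
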
00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.945 nvme0n1 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: ]] 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:22.945 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.206 nvme0n1 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.206 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: ]] 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:23.207 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:23.468 14:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:23.468 14:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:23.468 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.468 14:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.468 nvme0n1 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.468 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.469 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.731 nvme0n1 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.731 14:38:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: ]] 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.731 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.993 nvme0n1 00:36:23.993 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:23.993 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.993 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.993 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:23.993 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.993 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.254 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.255 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:24.255 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.255 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:24.255 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:24.255 14:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:24.255 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:24.255 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:24.255 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.516 nvme0n1 00:36:24.516 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:24.516 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.516 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.516 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:24.516 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.516 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:24.516 14:38:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.516 14:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.516 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:24.516 14:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: ]] 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:24.516 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.777 nvme0n1 00:36:24.777 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:24.777 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.777 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.777 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:24.777 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.777 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:24.777 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.777 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.777 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:24.777 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.777 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:24.777 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.777 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: ]] 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:24.778 14:38:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:24.778 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.039 nvme0n1 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.039 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:25.300 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.300 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:25.300 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:25.300 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:25.300 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.300 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.300 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:25.300 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.300 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:25.300 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:25.300 14:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:25.300 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:25.300 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:36:25.300 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.560 nvme0n1 00:36:25.560 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:25.560 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.560 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:25.560 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.560 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.560 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:25.560 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.560 14:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.560 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:25.560 14:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: ]] 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:25.560 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.561 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:25.561 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:25.561 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:25.561 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.561 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.561 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:25.561 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.561 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:25.561 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:25.561 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:25.561 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:25.561 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:25.561 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.131 nvme0n1 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:26.131 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.391 nvme0n1 00:36:26.391 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:26.391 14:38:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.391 14:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.391 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:26.391 14:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.391 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: ]] 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:26.651 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.911 nvme0n1 00:36:26.911 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:26.911 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.911 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.911 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:26.911 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.911 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:27.171 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.171 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.171 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:27.171 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: ]] 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:27.172 14:38:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.431 nvme0n1 00:36:27.431 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:27.431 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.431 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.431 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:27.431 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.431 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:27.431 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:36:27.432 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.432 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:27.432 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:27.691 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
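The get_main_ns_ip steps traced above repeatedly resolve the address that is then passed to bdev_nvme_attach_controller. A minimal bash sketch of that selection, based only on what the nvmf/common.sh lines print here (tcp maps to NVMF_INITIATOR_IP, rdma to NVMF_FIRST_TARGET_IP, and the echoed result is 10.0.0.1); the function body and the hard-coded addresses are illustrative assumptions, not the exact SPDK source.

  # Sketch of the address-selection step seen in the trace (not the exact
  # nvmf/common.sh implementation). The 10.0.0.1 value mirrors the echoed result.
  NVMF_INITIATOR_IP=10.0.0.1      # assumed test-environment value
  NVMF_FIRST_TARGET_IP=10.0.0.2   # placeholder; unused for the tcp transport

  get_main_ns_ip_sketch() {
      local transport=$1 ip ip_var
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $transport ]] && return 1
      ip_var=${ip_candidates[$transport]}
      [[ -z $ip_var ]] && return 1

      ip=${!ip_var}               # indirect expansion to the actual address
      [[ -z $ip ]] && return 1
      echo "$ip"
  }

  get_main_ns_ip_sketch tcp      # prints 10.0.0.1, as echoed in the log above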
00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:27.692 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.951 nvme0n1 00:36:27.951 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:27.951 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.951 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.951 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:27.951 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.951 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:27.951 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.951 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.951 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:27.951 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.211 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:28.211 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:28.211 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.211 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:28.211 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.211 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:28.211 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:28.211 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:28.211 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:28.211 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:28.211 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:28.211 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:28.211 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: ]] 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
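For orientation, the cycle this log repeats for every digest/dhgroup/keyid combination can be condensed into the bash sketch below. It assumes only an rpc_cmd wrapper around SPDK's scripts/rpc.py and that the DHHC-1 host/controller keys were registered earlier in the test under the names key<N> and ckey<N>; the RPC names, flags, NQNs and address are the ones printed in the trace.

  # One connect_authenticate iteration as traced above (sha384 / ffdhe8192 / keyid 0).
  # Assumption: $rootdir points at the SPDK checkout and the keys are already
  # loaded as key0..key4 / ckey0..ckey3 by the earlier part of auth.sh.
  rpc_cmd() { "$rootdir/scripts/rpc.py" "$@"; }

  digest=sha384 dhgroup=ffdhe8192 keyid=0

  # Allow only this digest/dhgroup pair on the initiator side.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach with the host key (and the controller key, when one exists for this keyid).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # The attach only succeeds if DH-HMAC-CHAP completed; verify, then tear down.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0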
00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:28.212 14:38:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.780 nvme0n1 00:36:28.780 14:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:28.780 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.780 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.780 14:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:28.780 14:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.780 14:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:28.780 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.780 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.780 14:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:28.780 14:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:29.040 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:29.041 14:38:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.612 nvme0n1 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: ]] 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:29.612 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.613 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:29.613 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.613 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:29.613 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:29.613 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:29.613 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.613 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.613 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:29.613 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.613 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:29.613 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:29.613 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:29.613 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:29.613 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:29.613 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.555 nvme0n1 00:36:30.555 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.555 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.555 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.555 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.555 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.555 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.555 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.555 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.555 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.555 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: ]] 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:30.556 14:38:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:30.556 14:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:30.556 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:30.556 14:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.556 14:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.126 nvme0n1 00:36:31.127 14:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.127 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:36:31.127 14:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.127 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.127 14:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.127 14:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:31.388 14:38:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.388 14:38:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.959 nvme0n1 00:36:31.959 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.959 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.959 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.959 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.959 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.959 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:31.959 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.959 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.959 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:31.959 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: ]] 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.225 nvme0n1 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.225 14:38:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:32.225 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:32.226 14:38:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:32.226 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:32.226 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.226 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.491 nvme0n1 00:36:32.491 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.491 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.491 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.491 14:38:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.491 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.491 14:38:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: ]] 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.491 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:32.492 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.492 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:32.492 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:32.492 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:32.492 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:32.492 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.492 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.753 nvme0n1 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.753 14:38:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: ]] 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:32.753 14:38:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.753 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.014 nvme0n1 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.014 nvme0n1 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.014 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: ]] 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.274 nvme0n1 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.274 
14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.274 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.533 14:38:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.533 14:38:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.533 nvme0n1 00:36:33.533 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.533 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.533 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.533 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.533 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.533 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: ]] 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.853 nvme0n1 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.853 14:38:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:33.853 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: ]] 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.117 nvme0n1 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.117 
14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.117 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.378 nvme0n1 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:34.378 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: ]] 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.379 14:38:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.643 nvme0n1 00:36:34.643 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.643 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.643 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.643 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.643 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.643 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.904 14:38:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:34.904 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.166 nvme0n1 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: ]] 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:35.166 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.428 nvme0n1 00:36:35.428 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:35.428 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:36:35.428 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:35.428 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.428 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.428 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:35.428 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.428 14:38:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.428 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:35.428 14:38:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: ]] 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:35.428 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.689 nvme0n1 00:36:35.689 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:35.689 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.689 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.689 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:35.689 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.689 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:35.689 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.689 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.689 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:35.689 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.950 nvme0n1 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.950 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: ]] 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
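For readers following the trace, every digest/dhgroup block above repeats the same per-key cycle. A minimal sketch of one round, assuming the rpc_cmd and nvmet_auth_set_key helpers behave exactly as the xtrace lines show (target listening on 10.0.0.1:4420, keys key0..key4 and ckey0..ckey4 already provisioned earlier in the run); this is a condensed reading of the trace, not the verbatim host/auth.sh source:

    # One authentication round: set the target-side key, restrict the initiator
    # to a single digest/dhgroup pair, attach with the matching DH-HMAC-CHAP
    # key pair, confirm the controller exists, then detach it again.
    digest=sha512
    dhgroup=ffdhe6144
    keyid=0

    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"          # target (kernel nvmet) side
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"                      # initiator-side policy
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0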
00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:36.211 14:38:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.472 nvme0n1 00:36:36.472 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:36.472 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.472 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.472 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:36.472 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.472 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
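The get_main_ns_ip fragments repeated before every attach all reduce to the same lookup. A rough reconstruction from those lines follows; TEST_TRANSPORT and the indirect expansion are assumptions inferred from the literal values ("tcp", NVMF_INITIATOR_IP, 10.0.0.1) printed in the trace, so treat this as a plausible reading rather than the actual nvmf/common.sh helper:

    # Pick the address variable that matches the transport, then resolve it.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z "$TEST_TRANSPORT" || -z "${ip_candidates[$TEST_TRANSPORT]}" ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read, e.g. NVMF_INITIATOR_IP
        [[ -z "${!ip}" ]] && return 1          # indirect expansion: the actual address
        echo "${!ip}"                          # 10.0.0.1 in this run
    }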
00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:36.732 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.992 nvme0n1 00:36:36.992 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:36.992 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.992 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.992 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:36.992 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.992 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: ]] 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:37.253 14:39:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.513 nvme0n1 00:36:37.513 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:37.513 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.513 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.513 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:37.513 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.513 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: ]] 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:37.775 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.037 nvme0n1 00:36:38.037 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.037 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.037 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.037 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.037 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.037 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.297 14:39:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.557 nvme0n1 00:36:38.557 14:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.557 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.557 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.557 14:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.557 14:39:02 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.557 14:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.817 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.817 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.817 14:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.817 14:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.817 14:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.817 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:38.817 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.817 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:38.817 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.817 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:38.817 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:38.817 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:38.817 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:38.817 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:38.817 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUxMmJlZWNhZmJjNDVhZTY2YWI1YmEyNmY1YjEyNzmHDEMZ: 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: ]] 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWExYWU2NTIzMTM1NjUzN2JhNWZmMmFlM2ViMDRkYmZiNjEyNTdiNGNhZGU3ODFmOTA5OGYxMzMwMTBmMjkxOf0nqsk=: 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.818 14:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.389 nvme0n1 00:36:39.389 14:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.389 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.389 14:39:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.389 14:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.389 14:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.389 14:39:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.389 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.389 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.389 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.389 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:39.649 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.220 nvme0n1 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.220 14:39:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWI2NTMwNGMzN2UxMmM2MDJkM2RmNWI0ZmExNmFlYjEL3NKX: 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: ]] 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQ2ZGUyYjhhZGEwODBmYzEzNTY2MDk3NjRkYTNiZTDbNUvt: 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.220 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.481 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:40.481 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.481 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:40.481 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:40.481 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:40.481 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.481 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.481 14:39:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:40.481 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.481 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:40.481 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:40.481 14:39:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:40.481 14:39:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:40.481 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:40.481 14:39:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.052 nvme0n1 00:36:41.052 14:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.052 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.052 14:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.052 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.052 14:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.052 14:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.052 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.052 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.052 14:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.052 14:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.052 14:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.052 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.052 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:41.052 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.052 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdjZGI1YjJiNTU5ZDQwOTFlYjNlNTFmYmVlZjhiNjFjNmI1M2UyZjFjNGM0ZGJlwngMvg==: 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: ]] 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWZkY2I2N2RlZjAzYThlZjE2MWZhYmJmM2FhMTdmZmOZSvGl: 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:41.053 14:39:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.053 14:39:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.995 nvme0n1 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTJkMTcyOWUzN2NjOTljNmM5Y2IzNzM2ZjUzNjlhNmRmZWQ3YTQxNTQ3OTAyNGIxZjc5MjljZWVhOWFlYzA0MpFhweo=: 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:36:41.995 14:39:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.565 nvme0n1 00:36:42.565 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.565 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.565 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.565 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.565 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZkMzVhYzhkYjQyYWYzZTMwOWI2MjAyMzU0MGI3NDc3MjhmMzQ1OTE0NTAxYzRh0/qLRQ==: 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjZiYWRmMmY4NGQyNGE3Y2JiMWQzZjJjM2Y3MTI0OGFiNzVkNGQ1MjY0ZGExMTgzHIc2gw==: 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.827 
14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.827 request: 00:36:42.827 { 00:36:42.827 "name": "nvme0", 00:36:42.827 "trtype": "tcp", 00:36:42.827 "traddr": "10.0.0.1", 00:36:42.827 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:42.827 "adrfam": "ipv4", 00:36:42.827 "trsvcid": "4420", 00:36:42.827 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:42.827 "method": "bdev_nvme_attach_controller", 00:36:42.827 "req_id": 1 00:36:42.827 } 00:36:42.827 Got JSON-RPC error response 00:36:42.827 response: 00:36:42.827 { 00:36:42.827 "code": -5, 00:36:42.827 "message": "Input/output error" 00:36:42.827 } 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:42.827 
14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.827 request: 00:36:42.827 { 00:36:42.827 "name": "nvme0", 00:36:42.827 "trtype": "tcp", 00:36:42.827 "traddr": "10.0.0.1", 00:36:42.827 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:42.827 "adrfam": "ipv4", 00:36:42.827 "trsvcid": "4420", 00:36:42.827 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:42.827 "dhchap_key": "key2", 00:36:42.827 "method": "bdev_nvme_attach_controller", 00:36:42.827 "req_id": 1 00:36:42.827 } 00:36:42.827 Got JSON-RPC error response 00:36:42.827 response: 00:36:42.827 { 00:36:42.827 "code": -5, 00:36:42.827 "message": "Input/output error" 00:36:42.827 } 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:42.827 
14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:42.827 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:42.828 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:42.828 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:36:42.828 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:42.828 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:36:42.828 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:42.828 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.088 request: 00:36:43.088 { 00:36:43.088 "name": "nvme0", 00:36:43.088 "trtype": "tcp", 00:36:43.088 "traddr": "10.0.0.1", 00:36:43.088 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:43.088 "adrfam": "ipv4", 00:36:43.088 "trsvcid": "4420", 00:36:43.088 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:43.088 "dhchap_key": "key1", 00:36:43.088 "dhchap_ctrlr_key": "ckey2", 00:36:43.088 "method": "bdev_nvme_attach_controller", 00:36:43.088 "req_id": 1 
00:36:43.088 } 00:36:43.088 Got JSON-RPC error response 00:36:43.088 response: 00:36:43.088 { 00:36:43.088 "code": -5, 00:36:43.088 "message": "Input/output error" 00:36:43.088 } 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:43.088 rmmod nvme_tcp 00:36:43.088 rmmod nvme_fabrics 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 785387 ']' 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 785387 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 785387 ']' 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 785387 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 785387 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 785387' 00:36:43.088 killing process with pid 785387 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 785387 00:36:43.088 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 785387 00:36:43.349 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:43.349 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:43.349 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:43.349 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:43.349 14:39:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:43.349 14:39:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:43.349 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:43.349 14:39:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:45.259 14:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:45.259 14:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:45.259 14:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:45.259 14:39:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:45.259 14:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:45.259 14:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:36:45.259 14:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:45.259 14:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:45.259 14:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:45.259 14:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:45.259 14:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:45.259 14:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:45.259 14:39:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:49.464 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:49.464 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:49.464 14:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5LP /tmp/spdk.key-null.DQn /tmp/spdk.key-sha256.T9b /tmp/spdk.key-sha384.bm4 /tmp/spdk.key-sha512.jtc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:49.464 14:39:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:52.799 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:52.799 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:52.799 0000:80:01.4 (8086 0b00): Already using the 
vfio-pci driver 00:36:52.799 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:52.799 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:52.799 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:52.799 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:53.060 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:53.060 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:53.060 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:53.060 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:53.060 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:53.060 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:53.060 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:53.060 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:53.060 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:53.060 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:53.060 00:36:53.060 real 0m57.379s 00:36:53.060 user 0m50.456s 00:36:53.060 sys 0m15.715s 00:36:53.060 14:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:53.060 14:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.060 ************************************ 00:36:53.060 END TEST nvmf_auth_host 00:36:53.060 ************************************ 00:36:53.060 14:39:16 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:36:53.060 14:39:16 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:53.060 14:39:16 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:36:53.060 14:39:16 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:53.060 14:39:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:53.060 ************************************ 00:36:53.060 START TEST nvmf_digest 00:36:53.060 ************************************ 00:36:53.060 14:39:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:53.322 * Looking for test storage... 
00:36:53.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:53.322 14:39:16 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:53.322 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:53.322 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:53.322 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:53.322 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:53.322 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:53.322 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:53.322 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:53.322 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:53.322 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:53.322 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:53.322 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:53.322 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:53.322 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:53.322 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:53.323 14:39:16 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:36:53.323 14:39:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:01.463 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:01.463 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:01.463 Found net devices under 0000:31:00.0: cvl_0_0 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:01.463 Found net devices under 0000:31:00.1: cvl_0_1 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:01.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:01.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:37:01.463 00:37:01.463 --- 10.0.0.2 ping statistics --- 00:37:01.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.463 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:01.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:01.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:37:01.463 00:37:01.463 --- 10.0.0.1 ping statistics --- 00:37:01.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.463 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:01.463 ************************************ 00:37:01.463 START TEST nvmf_digest_clean 00:37:01.463 ************************************ 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=802936 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 802936 00:37:01.463 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 802936 ']' 00:37:01.464 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:01.464 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:01.464 
14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:01.464 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:01.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:01.464 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:01.464 14:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:01.464 [2024-06-07 14:39:25.017364] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:37:01.464 [2024-06-07 14:39:25.017428] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:01.464 EAL: No free 2048 kB hugepages reported on node 1 00:37:01.464 [2024-06-07 14:39:25.094835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.724 [2024-06-07 14:39:25.133366] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:01.724 [2024-06-07 14:39:25.133412] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:01.724 [2024-06-07 14:39:25.133420] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:01.724 [2024-06-07 14:39:25.133427] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:01.724 [2024-06-07 14:39:25.133433] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
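For reference, the connectivity scaffolding that nvmftestinit builds in the trace above condenses to the shell sequence below. Nothing here is new: every command appears verbatim in the nvmf/common.sh xtrace; cvl_0_0 and cvl_0_1 are the interface names this rig reports for the two E810 ports, and 10.0.0.1/10.0.0.2 are the fixed initiator/target addresses.

    # move one port into a private namespace and address both ends of the link
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP traffic on the initiator side and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1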
00:37:01.724 [2024-06-07 14:39:25.133463] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:02.295 null0 00:37:02.295 [2024-06-07 14:39:25.889665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:02.295 [2024-06-07 14:39:25.913842] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=803113 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 803113 /var/tmp/bperf.sock 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 803113 ']' 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
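The target configuration produced by the rpc_cmd calls above (a null0 bdev exported over NVMe/TCP as nqn.2016-06.io.spdk:cnode1, listening on 10.0.0.2:4420) is collapsed in this trace, since common_target_config does not echo its RPC bodies. A minimal equivalent sketch, assuming the stock rpc.py command names and using placeholder bdev size and serial number (the real values live in host/digest.sh and nvmf/common.sh), would be:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc framework_start_init                        # target was started with --wait-for-rpc
    $rpc bdev_null_create null0 1000 512             # name, size in MiB, block size (placeholders)
    $rpc nvmf_create_transport -t tcp -o             # matches NVMF_TRANSPORT_OPTS='-t tcp -o'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420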
00:37:02.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:02.295 14:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:02.556 [2024-06-07 14:39:25.965935] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:37:02.556 [2024-06-07 14:39:25.965984] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803113 ] 00:37:02.556 EAL: No free 2048 kB hugepages reported on node 1 00:37:02.556 [2024-06-07 14:39:26.046612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.556 [2024-06-07 14:39:26.078362] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:03.128 14:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:03.128 14:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:37:03.128 14:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:03.128 14:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:03.128 14:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:03.388 14:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:03.388 14:39:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:03.649 nvme0n1 00:37:03.649 14:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:03.649 14:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:03.908 Running I/O for 2 seconds... 
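On the initiator side each run_bperf pass drives bdevperf entirely over its own RPC socket. Condensed from the trace above (with the long workspace prefix shortened to $rootdir, and the backgrounding/waitforlisten handling omitted), the randread 4096/128 run boils down to:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start bdevperf idle on core 1, waiting to be configured over /var/tmp/bperf.sock
    $rootdir/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # finish init, attach the remote namespace with data digest (TCP crc32c) enabled, run the workload
    $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests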
00:37:05.820 00:37:05.820 Latency(us) 00:37:05.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:05.820 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:05.820 nvme0n1 : 2.00 20227.98 79.02 0.00 0.00 6319.14 3222.19 13598.72 00:37:05.820 =================================================================================================================== 00:37:05.820 Total : 20227.98 79.02 0.00 0.00 6319.14 3222.19 13598.72 00:37:05.820 0 00:37:05.820 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:05.820 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:05.820 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:05.820 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:05.820 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:05.820 | select(.opcode=="crc32c") 00:37:05.820 | "\(.module_name) \(.executed)"' 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 803113 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 803113 ']' 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 803113 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 803113 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 803113' 00:37:06.080 killing process with pid 803113 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 803113 00:37:06.080 Received shutdown signal, test time was about 2.000000 seconds 00:37:06.080 00:37:06.080 Latency(us) 00:37:06.080 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:06.080 =================================================================================================================== 00:37:06.080 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 803113 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:06.080 14:39:29 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:06.080 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:06.340 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:06.340 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:06.340 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:06.340 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:06.340 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=803877 00:37:06.340 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 803877 /var/tmp/bperf.sock 00:37:06.340 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 803877 ']' 00:37:06.340 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:06.340 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:06.340 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:06.340 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:06.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:06.340 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:06.340 14:39:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:06.340 [2024-06-07 14:39:29.775800] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:37:06.340 [2024-06-07 14:39:29.775858] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803877 ] 00:37:06.340 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:06.340 Zero copy mechanism will not be used. 
00:37:06.340 EAL: No free 2048 kB hugepages reported on node 1 00:37:06.340 [2024-06-07 14:39:29.856776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:06.340 [2024-06-07 14:39:29.888074] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:06.912 14:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:06.912 14:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:37:06.912 14:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:06.912 14:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:06.912 14:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:07.173 14:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:07.173 14:39:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:07.433 nvme0n1 00:37:07.694 14:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:07.694 14:39:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:07.694 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:07.694 Zero copy mechanism will not be used. 00:37:07.694 Running I/O for 2 seconds... 
00:37:09.606 00:37:09.606 Latency(us) 00:37:09.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:09.606 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:09.606 nvme0n1 : 2.00 3194.73 399.34 0.00 0.00 5004.96 1167.36 13216.43 00:37:09.606 =================================================================================================================== 00:37:09.606 Total : 3194.73 399.34 0.00 0.00 5004.96 1167.36 13216.43 00:37:09.606 0 00:37:09.606 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:09.606 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:09.606 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:09.606 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:09.606 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:09.606 | select(.opcode=="crc32c") 00:37:09.606 | "\(.module_name) \(.executed)"' 00:37:09.868 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:09.868 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:09.868 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:09.868 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:09.868 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 803877 00:37:09.868 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 803877 ']' 00:37:09.868 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 803877 00:37:09.868 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:37:09.868 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:09.868 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 803877 00:37:09.868 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:09.868 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:09.868 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 803877' 00:37:09.868 killing process with pid 803877 00:37:09.868 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 803877 00:37:09.868 Received shutdown signal, test time was about 2.000000 seconds 00:37:09.868 00:37:09.868 Latency(us) 00:37:09.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:09.868 =================================================================================================================== 00:37:09.868 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:09.868 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 803877 00:37:10.128 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:10.128 14:39:33 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:10.128 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:10.128 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:10.128 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:10.128 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:10.128 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:10.128 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=804643 00:37:10.128 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 804643 /var/tmp/bperf.sock 00:37:10.128 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 804643 ']' 00:37:10.128 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:10.128 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:10.128 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:10.128 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:10.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:10.128 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:10.128 14:39:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:10.128 [2024-06-07 14:39:33.581882] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:37:10.128 [2024-06-07 14:39:33.581939] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid804643 ] 00:37:10.128 EAL: No free 2048 kB hugepages reported on node 1 00:37:10.128 [2024-06-07 14:39:33.659716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:10.128 [2024-06-07 14:39:33.687782] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:11.068 14:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:11.068 14:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:37:11.068 14:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:11.068 14:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:11.068 14:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:11.068 14:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:11.068 14:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:11.328 nvme0n1 00:37:11.328 14:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:11.328 14:39:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:11.588 Running I/O for 2 seconds... 
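After each run the script checks which accel module actually computed the CRC-32C digests. A rough sketch of that post-run check follows, reusing the jq filter that appears verbatim in the trace; the socket path and $SPDK_DIR are the same assumptions as in the sketch above.

```bash
# Sketch of the post-run crc32c accounting check; not the literal digest.sh.
stats=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats)

# Extract which accel module executed crc32c and how many operations it ran.
read -r acc_module acc_executed < <(jq -r '.operations[]
    | select(.opcode=="crc32c")
    | "\(.module_name) \(.executed)"' <<< "$stats")

# scan_dsa=false in these iterations, so the digests must come from software.
(( acc_executed > 0 )) || echo "no crc32c operations recorded" >&2
[[ $acc_module == software ]] || echo "unexpected accel module: $acc_module" >&2
```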
00:37:13.560 00:37:13.560 Latency(us) 00:37:13.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:13.560 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:13.560 nvme0n1 : 2.01 22023.81 86.03 0.00 0.00 5806.25 2280.11 14417.92 00:37:13.560 =================================================================================================================== 00:37:13.560 Total : 22023.81 86.03 0.00 0.00 5806.25 2280.11 14417.92 00:37:13.560 0 00:37:13.560 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:13.560 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:13.560 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:13.561 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:13.561 | select(.opcode=="crc32c") 00:37:13.561 | "\(.module_name) \(.executed)"' 00:37:13.561 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:13.561 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:13.561 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:13.561 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:13.561 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:13.561 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 804643 00:37:13.561 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 804643 ']' 00:37:13.561 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 804643 00:37:13.561 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:37:13.561 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:13.561 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 804643 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 804643' 00:37:13.822 killing process with pid 804643 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 804643 00:37:13.822 Received shutdown signal, test time was about 2.000000 seconds 00:37:13.822 00:37:13.822 Latency(us) 00:37:13.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:13.822 =================================================================================================================== 00:37:13.822 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 804643 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:37:13.822 14:39:37 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=805332 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 805332 /var/tmp/bperf.sock 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 805332 ']' 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:13.822 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:13.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:13.823 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:13.823 14:39:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:13.823 [2024-06-07 14:39:37.378952] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:37:13.823 [2024-06-07 14:39:37.379003] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805332 ] 00:37:13.823 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:13.823 Zero copy mechanism will not be used. 
00:37:13.823 EAL: No free 2048 kB hugepages reported on node 1 00:37:13.823 [2024-06-07 14:39:37.459388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:14.083 [2024-06-07 14:39:37.485475] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:14.656 14:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:14.656 14:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:37:14.656 14:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:14.656 14:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:14.656 14:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:14.917 14:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:14.917 14:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:15.178 nvme0n1 00:37:15.178 14:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:15.178 14:39:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:15.178 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:15.178 Zero copy mechanism will not be used. 00:37:15.178 Running I/O for 2 seconds... 
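Each run_bperf iteration launches a fresh bdevperf bound to core 1 (core mask 0x2) and waits for its RPC socket before configuring it; for the 131072-byte runs the tool also notes that the 65536-byte zero-copy threshold is exceeded, so zero copy is not used. A simplified launch-and-wait sketch is below, with a plain polling loop standing in for the harness's waitforlisten helper (an approximation, not the actual helper).

```bash
# Simplified stand-in for run_bperf's launch step; flags match the traced
# command line for the randwrite / 131072-byte / qd 16 iteration.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
bperfpid=$!

# Poll the RPC socket until it accepts connections (the harness uses its own
# waitforlisten helper for this; the loop below is only an approximation).
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods \
        >/dev/null 2>&1; do
    sleep 0.2
done
```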
00:37:17.725 00:37:17.725 Latency(us) 00:37:17.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.725 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:17.725 nvme0n1 : 2.00 4878.76 609.84 0.00 0.00 3274.15 1774.93 6280.53 00:37:17.725 =================================================================================================================== 00:37:17.725 Total : 4878.76 609.84 0.00 0.00 3274.15 1774.93 6280.53 00:37:17.725 0 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:17.725 | select(.opcode=="crc32c") 00:37:17.725 | "\(.module_name) \(.executed)"' 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 805332 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 805332 ']' 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 805332 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 805332 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 805332' 00:37:17.725 killing process with pid 805332 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 805332 00:37:17.725 Received shutdown signal, test time was about 2.000000 seconds 00:37:17.725 00:37:17.725 Latency(us) 00:37:17.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:17.725 =================================================================================================================== 00:37:17.725 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:17.725 14:39:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 805332 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 802936 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean 
-- common/autotest_common.sh@949 -- # '[' -z 802936 ']' 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 802936 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 802936 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 802936' 00:37:17.725 killing process with pid 802936 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 802936 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 802936 00:37:17.725 00:37:17.725 real 0m16.322s 00:37:17.725 user 0m32.070s 00:37:17.725 sys 0m3.375s 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:17.725 ************************************ 00:37:17.725 END TEST nvmf_digest_clean 00:37:17.725 ************************************ 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:17.725 ************************************ 00:37:17.725 START TEST nvmf_digest_error 00:37:17.725 ************************************ 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=806046 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 806046 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 806046 ']' 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:17.725 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:17.726 
14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:17.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:17.726 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:17.726 14:39:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:17.987 [2024-06-07 14:39:41.421520] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:37:17.987 [2024-06-07 14:39:41.421589] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:17.987 EAL: No free 2048 kB hugepages reported on node 1 00:37:17.987 [2024-06-07 14:39:41.497429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.987 [2024-06-07 14:39:41.536159] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:17.987 [2024-06-07 14:39:41.536208] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:17.987 [2024-06-07 14:39:41.536216] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:17.987 [2024-06-07 14:39:41.536223] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:17.987 [2024-06-07 14:39:41.536229] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:17.987 [2024-06-07 14:39:41.536248] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:37:18.560 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:18.560 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:37:18.560 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:18.560 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:18.560 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:18.820 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:18.821 [2024-06-07 14:39:42.218221] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 
-- # set +x 00:37:18.821 null0 00:37:18.821 [2024-06-07 14:39:42.292320] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:18.821 [2024-06-07 14:39:42.316504] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=806308 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 806308 /var/tmp/bperf.sock 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 806308 ']' 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:18.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:18.821 14:39:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:18.821 [2024-06-07 14:39:42.370898] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:37:18.821 [2024-06-07 14:39:42.370946] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid806308 ] 00:37:18.821 EAL: No free 2048 kB hugepages reported on node 1 00:37:18.821 [2024-06-07 14:39:42.450835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:19.081 [2024-06-07 14:39:42.479192] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:19.651 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:19.651 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:37:19.651 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:19.651 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:19.911 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:19.911 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:19.911 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:19.911 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:19.911 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:19.911 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:20.172 nvme0n1 00:37:20.172 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:20.172 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:20.172 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:20.172 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:20.172 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:20.172 14:39:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:20.172 Running I/O for 2 seconds... 
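The digest-error test that starts here relies on crc32c having been assigned to the error accel module at target startup (logged above), then injects corruption so the host observes data digest failures. The sketch below reproduces the traced call order; rpc_cmd and bperf_rpc are stand-ins for the autotest helpers (target RPC socket versus the bperf socket), which is an assumption about the wiring rather than verbatim test code.

```bash
# Sketch of the traced error-injection order; helper names and sockets are
# assumptions standing in for the autotest rpc_cmd / bperf_rpc helpers.
rpc_cmd()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }                         # nvmf target
bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # bdevperf

# Host side: keep per-controller error statistics and retry failed I/O forever.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target side: crc32c is already assigned to the error module; start disabled.
rpc_cmd accel_error_inject_error -o crc32c -t disable

# Attach with data digest enabled, then corrupt the next 256 crc32c operations.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

# Run I/O; the host then logs "data digest error on tqpair" and completes the
# affected commands with COMMAND TRANSIENT TRANSPORT ERROR, as shown below.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
```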
00:37:20.172 [2024-06-07 14:39:43.702665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.172 [2024-06-07 14:39:43.702695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.172 [2024-06-07 14:39:43.702704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.172 [2024-06-07 14:39:43.717555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.172 [2024-06-07 14:39:43.717575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.172 [2024-06-07 14:39:43.717581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.172 [2024-06-07 14:39:43.728112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.172 [2024-06-07 14:39:43.728131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.172 [2024-06-07 14:39:43.728137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.172 [2024-06-07 14:39:43.741537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.172 [2024-06-07 14:39:43.741556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.172 [2024-06-07 14:39:43.741562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.172 [2024-06-07 14:39:43.754423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.172 [2024-06-07 14:39:43.754441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.172 [2024-06-07 14:39:43.754448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.172 [2024-06-07 14:39:43.767044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.172 [2024-06-07 14:39:43.767061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.172 [2024-06-07 14:39:43.767067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.172 [2024-06-07 14:39:43.778991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.172 [2024-06-07 14:39:43.779008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.172 [2024-06-07 14:39:43.779014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.172 [2024-06-07 14:39:43.791782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.172 [2024-06-07 14:39:43.791800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.172 [2024-06-07 14:39:43.791807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.172 [2024-06-07 14:39:43.803995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.172 [2024-06-07 14:39:43.804013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.172 [2024-06-07 14:39:43.804024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.172 [2024-06-07 14:39:43.815649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.172 [2024-06-07 14:39:43.815666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.172 [2024-06-07 14:39:43.815672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.434 [2024-06-07 14:39:43.827809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.434 [2024-06-07 14:39:43.827827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.434 [2024-06-07 14:39:43.827833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.434 [2024-06-07 14:39:43.839819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.434 [2024-06-07 14:39:43.839837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.434 [2024-06-07 14:39:43.839843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.434 [2024-06-07 14:39:43.852023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.434 [2024-06-07 14:39:43.852041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.434 [2024-06-07 14:39:43.852047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.434 [2024-06-07 14:39:43.864057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.434 [2024-06-07 14:39:43.864074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.434 [2024-06-07 14:39:43.864081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.434 [2024-06-07 14:39:43.877415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.434 [2024-06-07 14:39:43.877432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.434 [2024-06-07 14:39:43.877438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.434 [2024-06-07 14:39:43.892094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.434 [2024-06-07 14:39:43.892111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.434 [2024-06-07 14:39:43.892119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.434 [2024-06-07 14:39:43.902406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.434 [2024-06-07 14:39:43.902423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.434 [2024-06-07 14:39:43.902429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.434 [2024-06-07 14:39:43.915826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.434 [2024-06-07 14:39:43.915844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.434 [2024-06-07 14:39:43.915850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.434 [2024-06-07 14:39:43.928437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.434 [2024-06-07 14:39:43.928454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.434 [2024-06-07 14:39:43.928461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.434 [2024-06-07 14:39:43.940060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.434 [2024-06-07 14:39:43.940077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.434 [2024-06-07 14:39:43.940083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.434 [2024-06-07 14:39:43.952910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.434 [2024-06-07 14:39:43.952928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:20.434 [2024-06-07 14:39:43.952934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.434 [2024-06-07 14:39:43.965040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.434 [2024-06-07 14:39:43.965057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.434 [2024-06-07 14:39:43.965063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.434 [2024-06-07 14:39:43.977332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.434 [2024-06-07 14:39:43.977349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.434 [2024-06-07 14:39:43.977355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.434 [2024-06-07 14:39:43.990558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.435 [2024-06-07 14:39:43.990576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.435 [2024-06-07 14:39:43.990583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.435 [2024-06-07 14:39:44.002004] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.435 [2024-06-07 14:39:44.002022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.435 [2024-06-07 14:39:44.002028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.435 [2024-06-07 14:39:44.014150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.435 [2024-06-07 14:39:44.014167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.435 [2024-06-07 14:39:44.014177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.435 [2024-06-07 14:39:44.027135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.435 [2024-06-07 14:39:44.027152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.435 [2024-06-07 14:39:44.027158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.435 [2024-06-07 14:39:44.039468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.435 [2024-06-07 14:39:44.039485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 
lba:4650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.435 [2024-06-07 14:39:44.039492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.435 [2024-06-07 14:39:44.050642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.435 [2024-06-07 14:39:44.050659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.435 [2024-06-07 14:39:44.050666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.435 [2024-06-07 14:39:44.062442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.435 [2024-06-07 14:39:44.062460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.435 [2024-06-07 14:39:44.062466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.435 [2024-06-07 14:39:44.076174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.435 [2024-06-07 14:39:44.076192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.435 [2024-06-07 14:39:44.076203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.089314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.089331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.089338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.099908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.099925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.099932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.112881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.112899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.112906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.125957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.125979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.125985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.137523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.137541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.137547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.149737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.149754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.149761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.162479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.162496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.162502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.175246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.175263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.175269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.186837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.186855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.186861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.198522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.198539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.198545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.210276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 
00:37:20.695 [2024-06-07 14:39:44.210294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.210300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.224912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.224930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.224936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.237026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.237044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.237050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.247955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.247972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.247978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.260208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.260226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.260233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.274026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.274044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.274050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.285961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.285979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.285985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.298192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.298214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.298220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.310563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.310580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.310587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.322182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.322203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.322209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.695 [2024-06-07 14:39:44.335931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.695 [2024-06-07 14:39:44.335948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.695 [2024-06-07 14:39:44.335958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.956 [2024-06-07 14:39:44.347851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.956 [2024-06-07 14:39:44.347868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.956 [2024-06-07 14:39:44.347875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.956 [2024-06-07 14:39:44.358667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.956 [2024-06-07 14:39:44.358684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.956 [2024-06-07 14:39:44.358690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.956 [2024-06-07 14:39:44.371703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.956 [2024-06-07 14:39:44.371720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.956 [2024-06-07 14:39:44.371726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.956 [2024-06-07 14:39:44.384031] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.956 [2024-06-07 14:39:44.384048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.956 [2024-06-07 14:39:44.384055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.956 [2024-06-07 14:39:44.395768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.956 [2024-06-07 14:39:44.395785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.956 [2024-06-07 14:39:44.395791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.956 [2024-06-07 14:39:44.409134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.956 [2024-06-07 14:39:44.409150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.956 [2024-06-07 14:39:44.409157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.956 [2024-06-07 14:39:44.421080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.956 [2024-06-07 14:39:44.421097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.956 [2024-06-07 14:39:44.421103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.956 [2024-06-07 14:39:44.433943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.956 [2024-06-07 14:39:44.433960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.956 [2024-06-07 14:39:44.433967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.956 [2024-06-07 14:39:44.446323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.956 [2024-06-07 14:39:44.446340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.956 [2024-06-07 14:39:44.446347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.956 [2024-06-07 14:39:44.459018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.956 [2024-06-07 14:39:44.459035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.956 [2024-06-07 14:39:44.459041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:37:20.956 [2024-06-07 14:39:44.471632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.956 [2024-06-07 14:39:44.471649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.956 [2024-06-07 14:39:44.471656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.956 [2024-06-07 14:39:44.483743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.956 [2024-06-07 14:39:44.483760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.956 [2024-06-07 14:39:44.483766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.956 [2024-06-07 14:39:44.493960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.956 [2024-06-07 14:39:44.493977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.957 [2024-06-07 14:39:44.493983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.957 [2024-06-07 14:39:44.507521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.957 [2024-06-07 14:39:44.507537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.957 [2024-06-07 14:39:44.507544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.957 [2024-06-07 14:39:44.518984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.957 [2024-06-07 14:39:44.519002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.957 [2024-06-07 14:39:44.519008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.957 [2024-06-07 14:39:44.531770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.957 [2024-06-07 14:39:44.531787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.957 [2024-06-07 14:39:44.531793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.957 [2024-06-07 14:39:44.544057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.957 [2024-06-07 14:39:44.544074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.957 [2024-06-07 14:39:44.544083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.957 [2024-06-07 14:39:44.557591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.957 [2024-06-07 14:39:44.557609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.957 [2024-06-07 14:39:44.557615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.957 [2024-06-07 14:39:44.570093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.957 [2024-06-07 14:39:44.570110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.957 [2024-06-07 14:39:44.570116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.957 [2024-06-07 14:39:44.582086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.957 [2024-06-07 14:39:44.582104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.957 [2024-06-07 14:39:44.582110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.957 [2024-06-07 14:39:44.593771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:20.957 [2024-06-07 14:39:44.593788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.957 [2024-06-07 14:39:44.593794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.606306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.606324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.606330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.618854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.618872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.618878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.631601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.631618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.631625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.640863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.640880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.640886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.654357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.654377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.654383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.667185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.667206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.667213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.680481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.680498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.680504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.693355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.693372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.693378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.703901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.703918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.703924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.716979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.716997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:21.219 [2024-06-07 14:39:44.717003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.729556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.729573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.729580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.741399] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.741416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.741422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.754326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.754343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.754350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.766143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.766159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.766166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.778950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.778968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.778974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.792108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.792125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.792131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.804170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.804187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:507 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.804198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.815755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.815772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.815778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.827612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.827629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.827636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.839239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.839256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.839262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.851856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.851873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.851879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.219 [2024-06-07 14:39:44.865274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.219 [2024-06-07 14:39:44.865290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.219 [2024-06-07 14:39:44.865300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.481 [2024-06-07 14:39:44.878541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.481 [2024-06-07 14:39:44.878558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.481 [2024-06-07 14:39:44.878564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.481 [2024-06-07 14:39:44.890188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.481 [2024-06-07 14:39:44.890207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.481 [2024-06-07 14:39:44.890214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:44.900428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:44.900444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:44.900451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:44.914771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:44.914788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:44.914794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:44.928013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:44.928031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:44.928037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:44.940633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:44.940650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:44.940656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:44.952452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:44.952469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:44.952475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:44.963292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:44.963309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:44.963315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:44.977473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 
00:37:21.482 [2024-06-07 14:39:44.977493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:44.977500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:44.989910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:44.989926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:44.989933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:45.001491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:45.001508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:45.001515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:45.013430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:45.013447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:45.013453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:45.026700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:45.026717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:45.026723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:45.037620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:45.037637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:45.037643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:45.050843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:45.050860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:45.050865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:45.062827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:45.062844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:45.062850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:45.076229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:45.076245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:45.076251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:45.089511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:45.089528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:45.089535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:45.102307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:45.102324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:45.102330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:45.112386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:45.112403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:45.112410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.482 [2024-06-07 14:39:45.125349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.482 [2024-06-07 14:39:45.125366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.482 [2024-06-07 14:39:45.125373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.136272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.136289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.136296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.149987] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.150004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.150011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.163075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.163093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.163099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.174948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.174964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.174971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.187231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.187251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.187257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.200042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.200059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.200065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.211597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.211614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.211621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.224528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.224545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.224552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.235461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.235478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.235485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.248581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.248597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.248603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.261676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.261693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.261699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.272167] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.272184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.272191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.284940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.284958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.284964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.297073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.297090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.297096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.310422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.310439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.310445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.323068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.323086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.323093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.335532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.335549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.335556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.346219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.346236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.346242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.358921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.358939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.358945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.371764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.371781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.371787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:21.744 [2024-06-07 14:39:45.383040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:21.744 [2024-06-07 14:39:45.383058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:21.744 [2024-06-07 14:39:45.383065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.006 [2024-06-07 14:39:45.396552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.006 [2024-06-07 14:39:45.396570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.006 [2024-06-07 14:39:45.396580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.006 [2024-06-07 14:39:45.408733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.006 [2024-06-07 14:39:45.408750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.006 [2024-06-07 14:39:45.408756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.006 [2024-06-07 14:39:45.420664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.006 [2024-06-07 14:39:45.420681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.006 [2024-06-07 14:39:45.420688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.006 [2024-06-07 14:39:45.433730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.006 [2024-06-07 14:39:45.433747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.006 [2024-06-07 14:39:45.433753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.006 [2024-06-07 14:39:45.446275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.006 [2024-06-07 14:39:45.446292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.006 [2024-06-07 14:39:45.446299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.006 [2024-06-07 14:39:45.459085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.006 [2024-06-07 14:39:45.459102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.006 [2024-06-07 14:39:45.459108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.006 [2024-06-07 14:39:45.469506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.006 [2024-06-07 14:39:45.469523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.006 [2024-06-07 14:39:45.469529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.006 [2024-06-07 14:39:45.483058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.006 [2024-06-07 14:39:45.483076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:22.006 [2024-06-07 14:39:45.483082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.006 [2024-06-07 14:39:45.496576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.006 [2024-06-07 14:39:45.496594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.006 [2024-06-07 14:39:45.496600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.006 [2024-06-07 14:39:45.508024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.006 [2024-06-07 14:39:45.508044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.006 [2024-06-07 14:39:45.508051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.007 [2024-06-07 14:39:45.520505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.007 [2024-06-07 14:39:45.520523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.007 [2024-06-07 14:39:45.520529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.007 [2024-06-07 14:39:45.532632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.007 [2024-06-07 14:39:45.532650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.007 [2024-06-07 14:39:45.532656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.007 [2024-06-07 14:39:45.545695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.007 [2024-06-07 14:39:45.545713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.007 [2024-06-07 14:39:45.545719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.007 [2024-06-07 14:39:45.556398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.007 [2024-06-07 14:39:45.556415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.007 [2024-06-07 14:39:45.556422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.007 [2024-06-07 14:39:45.569457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.007 [2024-06-07 14:39:45.569475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:25287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.007 [2024-06-07 14:39:45.569481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.007 [2024-06-07 14:39:45.581853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.007 [2024-06-07 14:39:45.581870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.007 [2024-06-07 14:39:45.581876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.007 [2024-06-07 14:39:45.594405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.007 [2024-06-07 14:39:45.594423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.007 [2024-06-07 14:39:45.594430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.007 [2024-06-07 14:39:45.605719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.007 [2024-06-07 14:39:45.605736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.007 [2024-06-07 14:39:45.605743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.007 [2024-06-07 14:39:45.617476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.007 [2024-06-07 14:39:45.617493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.007 [2024-06-07 14:39:45.617500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.007 [2024-06-07 14:39:45.631348] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.007 [2024-06-07 14:39:45.631366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.007 [2024-06-07 14:39:45.631372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.007 [2024-06-07 14:39:45.643538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.007 [2024-06-07 14:39:45.643556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.007 [2024-06-07 14:39:45.643562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.268 [2024-06-07 14:39:45.655988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.268 [2024-06-07 14:39:45.656005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.268 [2024-06-07 14:39:45.656011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.268 [2024-06-07 14:39:45.668256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.268 [2024-06-07 14:39:45.668273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.268 [2024-06-07 14:39:45.668280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.268 [2024-06-07 14:39:45.680205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.268 [2024-06-07 14:39:45.680222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.268 [2024-06-07 14:39:45.680229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.268 [2024-06-07 14:39:45.691107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22578b0) 00:37:22.268 [2024-06-07 14:39:45.691125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:22.268 [2024-06-07 14:39:45.691131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:22.268 00:37:22.268 Latency(us) 00:37:22.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:22.268 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:22.268 nvme0n1 : 2.05 20221.69 78.99 0.00 0.00 6197.97 2293.76 46093.65 00:37:22.268 =================================================================================================================== 00:37:22.268 Total : 20221.69 78.99 0.00 0.00 6197.97 2293.76 46093.65 00:37:22.268 0 00:37:22.268 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:22.268 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:22.268 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:22.268 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:22.268 | .driver_specific 00:37:22.268 | .nvme_error 00:37:22.268 | .status_code 00:37:22.268 | .command_transient_transport_error' 00:37:22.529 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:37:22.529 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 806308 00:37:22.529 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 806308 ']' 00:37:22.529 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 806308 00:37:22.529 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # uname 00:37:22.530 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:22.530 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 806308 00:37:22.530 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:22.530 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:22.530 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 806308' 00:37:22.530 killing process with pid 806308 00:37:22.530 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 806308 00:37:22.530 Received shutdown signal, test time was about 2.000000 seconds 00:37:22.530 00:37:22.530 Latency(us) 00:37:22.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:22.530 =================================================================================================================== 00:37:22.530 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:22.530 14:39:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 806308 00:37:22.530 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:37:22.530 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:22.530 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:22.530 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:22.530 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:22.530 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=807059 00:37:22.530 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 807059 /var/tmp/bperf.sock 00:37:22.530 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 807059 ']' 00:37:22.530 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:37:22.530 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:22.530 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:22.530 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:22.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:22.530 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:22.530 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:22.530 [2024-06-07 14:39:46.127924] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:37:22.530 [2024-06-07 14:39:46.127982] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid807059 ] 00:37:22.530 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:22.530 Zero copy mechanism will not be used. 00:37:22.530 EAL: No free 2048 kB hugepages reported on node 1 00:37:22.790 [2024-06-07 14:39:46.207731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.790 [2024-06-07 14:39:46.235804] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:23.362 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:23.362 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:37:23.362 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:23.362 14:39:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:23.624 14:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:23.624 14:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:23.624 14:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:23.624 14:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:23.624 14:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:23.624 14:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:23.885 nvme0n1 00:37:23.885 14:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:23.885 14:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:23.885 14:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:23.885 14:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:23.885 14:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:23.885 14:39:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:23.885 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:23.885 Zero copy mechanism will not be used. 00:37:23.885 Running I/O for 2 seconds... 
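For reference, the host/digest.sh trace above boils down to roughly the following command sequence (a condensed sketch: every binary path, address, and option is taken from the commands visible in this log; the only assumption is that the target-side rpc_cmd calls go to SPDK's default RPC socket /var/tmp/spdk.sock, which the trace does not show explicitly):

  # Sketch of the nvmf_digest_error flow traced above; TGT_SOCK is an assumption, everything else is from this log.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock
  TGT_SOCK=/var/tmp/spdk.sock   # assumed default target RPC socket, not shown in the trace

  # Start bdevperf as the initiator-side load generator (random 128 KiB reads, queue depth 16, 2 s run).
  $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randread -o 131072 -t 2 -q 16 -z &

  # Track NVMe error statistics and retry failed I/O indefinitely on the bperf instance.
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the controller with TCP data digest enabled while crc32c error injection is disabled.
  $SPDK/scripts/rpc.py -s $TGT_SOCK accel_error_inject_error -o crc32c -t disable
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Inject crc32c corruption on the target (as the trace does with -o crc32c -t corrupt -i 32),
  # run the workload, then read back the transient transport error counter.
  $SPDK/scripts/rpc.py -s $TGT_SOCK accel_error_inject_error -o crc32c -t corrupt -i 32
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The (( 162 > 0 )) check earlier in this log is that same counter read after the previous 4096-byte run: each injected crc32c corruption surfaces on the initiator as a data digest error and is counted as a COMMAND TRANSIENT TRANSPORT ERROR, so the subtest passes as long as the counter is nonzero.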
00:37:23.885 [2024-06-07 14:39:47.441019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:23.885 [2024-06-07 14:39:47.441051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.885 [2024-06-07 14:39:47.441059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:23.885 [2024-06-07 14:39:47.453604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:23.885 [2024-06-07 14:39:47.453627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.885 [2024-06-07 14:39:47.453634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:23.885 [2024-06-07 14:39:47.466150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:23.885 [2024-06-07 14:39:47.466174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.885 [2024-06-07 14:39:47.466181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:23.885 [2024-06-07 14:39:47.478341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:23.885 [2024-06-07 14:39:47.478360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.885 [2024-06-07 14:39:47.478366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:23.885 [2024-06-07 14:39:47.491865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:23.885 [2024-06-07 14:39:47.491883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.885 [2024-06-07 14:39:47.491889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:23.885 [2024-06-07 14:39:47.503485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:23.885 [2024-06-07 14:39:47.503504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.885 [2024-06-07 14:39:47.503510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:23.885 [2024-06-07 14:39:47.516726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:23.885 [2024-06-07 14:39:47.516745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.885 [2024-06-07 14:39:47.516751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:23.885 [2024-06-07 14:39:47.530171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:23.885 [2024-06-07 14:39:47.530189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.885 [2024-06-07 14:39:47.530200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.539769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.539787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.539794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.549111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.549130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.549136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.559531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.559549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.559555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.570073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.570091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.570097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.580228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.580247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.580253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.589135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.589153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.589159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.600317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.600335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.600342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.610790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.610807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.610814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.620526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.620544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.620550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.630820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.630838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.630844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.638481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.638500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.638506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.648616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.648633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.648643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.656877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.656895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:24.147 [2024-06-07 14:39:47.656901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.667483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.667501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.667507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.677455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.677474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.677481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.683563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.683580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.683587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.689650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.689667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.689674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.695561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.695578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.695585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.704407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.704425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.704431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.715553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.715571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.715578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.725542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.725561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.725567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.736210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.736229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.147 [2024-06-07 14:39:47.736235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.147 [2024-06-07 14:39:47.745218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.147 [2024-06-07 14:39:47.745236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.148 [2024-06-07 14:39:47.745242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.148 [2024-06-07 14:39:47.754325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.148 [2024-06-07 14:39:47.754343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.148 [2024-06-07 14:39:47.754350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.148 [2024-06-07 14:39:47.764640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.148 [2024-06-07 14:39:47.764658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.148 [2024-06-07 14:39:47.764664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.148 [2024-06-07 14:39:47.776010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.148 [2024-06-07 14:39:47.776028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.148 [2024-06-07 14:39:47.776035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.148 [2024-06-07 14:39:47.785313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.148 [2024-06-07 14:39:47.785331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.148 [2024-06-07 14:39:47.785338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.409 [2024-06-07 14:39:47.795814] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.409 [2024-06-07 14:39:47.795831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.409 [2024-06-07 14:39:47.795838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.409 [2024-06-07 14:39:47.807811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.409 [2024-06-07 14:39:47.807829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.409 [2024-06-07 14:39:47.807839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.409 [2024-06-07 14:39:47.818217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.409 [2024-06-07 14:39:47.818235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.409 [2024-06-07 14:39:47.818241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.409 [2024-06-07 14:39:47.830769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.409 [2024-06-07 14:39:47.830787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.409 [2024-06-07 14:39:47.830793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.409 [2024-06-07 14:39:47.843607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.409 [2024-06-07 14:39:47.843625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.409 [2024-06-07 14:39:47.843631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.409 [2024-06-07 14:39:47.856804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.409 [2024-06-07 14:39:47.856823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.409 [2024-06-07 14:39:47.856830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.409 [2024-06-07 14:39:47.869616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 
00:37:24.409 [2024-06-07 14:39:47.869634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.409 [2024-06-07 14:39:47.869640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.409 [2024-06-07 14:39:47.882413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:47.882432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:47.882438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.410 [2024-06-07 14:39:47.896014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:47.896031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:47.896038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.410 [2024-06-07 14:39:47.907333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:47.907350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:47.907357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.410 [2024-06-07 14:39:47.916994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:47.917015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:47.917022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.410 [2024-06-07 14:39:47.927675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:47.927693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:47.927699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.410 [2024-06-07 14:39:47.938761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:47.938779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:47.938785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.410 [2024-06-07 14:39:47.950134] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:47.950152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:47.950158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.410 [2024-06-07 14:39:47.958672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:47.958691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:47.958697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.410 [2024-06-07 14:39:47.969157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:47.969175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:47.969181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.410 [2024-06-07 14:39:47.980842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:47.980861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:47.980867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.410 [2024-06-07 14:39:47.991430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:47.991448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:47.991455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.410 [2024-06-07 14:39:48.002311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:48.002329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:48.002336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.410 [2024-06-07 14:39:48.014210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:48.014228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:48.014234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:37:24.410 [2024-06-07 14:39:48.023728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:48.023747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:48.023753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.410 [2024-06-07 14:39:48.032566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:48.032585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:48.032591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.410 [2024-06-07 14:39:48.042648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:48.042668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:48.042675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.410 [2024-06-07 14:39:48.052708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.410 [2024-06-07 14:39:48.052726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.410 [2024-06-07 14:39:48.052733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.062631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.062649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.062656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.072594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.072613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.072619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.083056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.083075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.083082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.093158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.093177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.093187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.102750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.102769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.102775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.114155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.114173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.114180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.123509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.123527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.123534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.133593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.133612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.133618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.143994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.144013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.144019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.154151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.154170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.154176] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.163595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.163613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.163619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.173961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.173980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.173986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.183918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.183937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.183943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.194421] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.194439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.194445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.204729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.204747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.204753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.213781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.213800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.213806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.224186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.224209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.224215] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.234295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.234314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.234320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.243476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.243494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.243501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.251376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.251395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.251401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.259938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.259956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.259966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.265157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.265175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.265181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.672 [2024-06-07 14:39:48.274323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.672 [2024-06-07 14:39:48.274340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.672 [2024-06-07 14:39:48.274347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.673 [2024-06-07 14:39:48.283496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.673 [2024-06-07 14:39:48.283514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:24.673 [2024-06-07 14:39:48.283520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.673 [2024-06-07 14:39:48.295578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.673 [2024-06-07 14:39:48.295596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.673 [2024-06-07 14:39:48.295602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.673 [2024-06-07 14:39:48.306659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.673 [2024-06-07 14:39:48.306678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.673 [2024-06-07 14:39:48.306684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.673 [2024-06-07 14:39:48.316616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.673 [2024-06-07 14:39:48.316634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.673 [2024-06-07 14:39:48.316640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.934 [2024-06-07 14:39:48.328092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.934 [2024-06-07 14:39:48.328110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.934 [2024-06-07 14:39:48.328116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.934 [2024-06-07 14:39:48.338289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.934 [2024-06-07 14:39:48.338307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.934 [2024-06-07 14:39:48.338314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.934 [2024-06-07 14:39:48.348475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.934 [2024-06-07 14:39:48.348495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.934 [2024-06-07 14:39:48.348502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.934 [2024-06-07 14:39:48.359295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.934 [2024-06-07 14:39:48.359313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.934 [2024-06-07 14:39:48.359319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.934 [2024-06-07 14:39:48.368615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.368633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.368639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.377615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.377632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.377639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.388312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.388330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.388336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.400253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.400271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.400277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.411464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.411481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.411488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.421396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.421414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.421420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.429848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.429865] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.429871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.441364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.441381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.441388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.451933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.451951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.451957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.461472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.461490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.461496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.471534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.471552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.471558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.483062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.483079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.483086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.491151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.491169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.491175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.500355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.500374] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.500380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.509003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.509022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.509028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.520684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.520703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.520712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.531197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.531216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.531222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.542874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.542893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.542899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.553200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.553218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.553224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.563145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:24.935 [2024-06-07 14:39:48.563163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.563170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:24.935 [2024-06-07 14:39:48.574716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 
00:37:24.935 [2024-06-07 14:39:48.574734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.935 [2024-06-07 14:39:48.574741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.197 [2024-06-07 14:39:48.585150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.197 [2024-06-07 14:39:48.585169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.197 [2024-06-07 14:39:48.585175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.197 [2024-06-07 14:39:48.596590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.197 [2024-06-07 14:39:48.596608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.197 [2024-06-07 14:39:48.596614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.197 [2024-06-07 14:39:48.607251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.197 [2024-06-07 14:39:48.607269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.197 [2024-06-07 14:39:48.607275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.197 [2024-06-07 14:39:48.617532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.197 [2024-06-07 14:39:48.617554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.197 [2024-06-07 14:39:48.617560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.197 [2024-06-07 14:39:48.626940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.197 [2024-06-07 14:39:48.626957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.197 [2024-06-07 14:39:48.626964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.197 [2024-06-07 14:39:48.636843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.197 [2024-06-07 14:39:48.636861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.197 [2024-06-07 14:39:48.636867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.197 [2024-06-07 14:39:48.648490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.648508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.648514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.659417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.659436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.659442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.669744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.669762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.669768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.681026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.681044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.681050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.691541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.691559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.691566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.702946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.702964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.702970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.712885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.712903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.712910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.721669] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.721687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.721694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.732246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.732265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.732271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.742346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.742364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.742370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.752050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.752069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.752075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.761332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.761351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.761357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.770613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.770632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.770638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.781218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.781237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.781243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:37:25.198 [2024-06-07 14:39:48.793893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.793911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.793921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.800802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.800820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.800826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.810470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.810489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.810495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.820468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.820487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.820493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.832280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.832298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.832305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.198 [2024-06-07 14:39:48.842493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.198 [2024-06-07 14:39:48.842512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.198 [2024-06-07 14:39:48.842518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:48.852830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:48.852850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:48.852856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:48.862843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:48.862862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:48.862868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:48.873192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:48.873215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:48.873221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:48.882877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:48.882899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:48.882905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:48.894175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:48.894193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:48.894203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:48.903855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:48.903873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:48.903880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:48.914830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:48.914849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:48.914855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:48.924067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:48.924086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:48.924092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:48.934152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:48.934171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:48.934178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:48.943213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:48.943232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:48.943238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:48.952833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:48.952852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:48.952858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:48.962766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:48.962785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:48.962794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:48.972776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:48.972794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:48.972800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:48.984726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:48.984745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:48.984751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:48.995747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:48.995766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:25.460 [2024-06-07 14:39:48.995772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:49.006480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:49.006499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:49.006505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:49.016812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:49.016831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:49.016838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:49.027905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:49.027924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:49.027930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:49.039429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:49.039447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:49.039453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:49.048840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:49.048859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:49.048865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:49.059131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:49.059154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:49.059160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.460 [2024-06-07 14:39:49.068972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.460 [2024-06-07 14:39:49.068991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18336 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.460 [2024-06-07 14:39:49.068997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.461 [2024-06-07 14:39:49.078927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.461 [2024-06-07 14:39:49.078946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.461 [2024-06-07 14:39:49.078952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.461 [2024-06-07 14:39:49.089181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.461 [2024-06-07 14:39:49.089204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.461 [2024-06-07 14:39:49.089211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.461 [2024-06-07 14:39:49.099233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.461 [2024-06-07 14:39:49.099252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.461 [2024-06-07 14:39:49.099258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.721 [2024-06-07 14:39:49.111212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.721 [2024-06-07 14:39:49.111231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.721 [2024-06-07 14:39:49.111237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.721 [2024-06-07 14:39:49.121050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.721 [2024-06-07 14:39:49.121069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.721 [2024-06-07 14:39:49.121075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.721 [2024-06-07 14:39:49.131377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.721 [2024-06-07 14:39:49.131395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.721 [2024-06-07 14:39:49.131401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.721 [2024-06-07 14:39:49.135286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.721 [2024-06-07 14:39:49.135304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.721 [2024-06-07 14:39:49.135311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.721 [2024-06-07 14:39:49.145516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.721 [2024-06-07 14:39:49.145535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.721 [2024-06-07 14:39:49.145541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.721 [2024-06-07 14:39:49.154153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.721 [2024-06-07 14:39:49.154172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.721 [2024-06-07 14:39:49.154179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.721 [2024-06-07 14:39:49.164420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.721 [2024-06-07 14:39:49.164439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.721 [2024-06-07 14:39:49.164446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.721 [2024-06-07 14:39:49.175843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.721 [2024-06-07 14:39:49.175862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.721 [2024-06-07 14:39:49.175868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.721 [2024-06-07 14:39:49.189100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.721 [2024-06-07 14:39:49.189119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.721 [2024-06-07 14:39:49.189125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.201835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.722 [2024-06-07 14:39:49.201854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.201860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.214891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.722 
[2024-06-07 14:39:49.214910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.214916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.227333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.722 [2024-06-07 14:39:49.227352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.227358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.239799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.722 [2024-06-07 14:39:49.239818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.239827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.248561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.722 [2024-06-07 14:39:49.248580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.248586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.258496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.722 [2024-06-07 14:39:49.258515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.258521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.268001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.722 [2024-06-07 14:39:49.268020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.268026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.278885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.722 [2024-06-07 14:39:49.278903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.278909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.289552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x24eb1f0) 00:37:25.722 [2024-06-07 14:39:49.289570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.289576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.299648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.722 [2024-06-07 14:39:49.299667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.299673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.307539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.722 [2024-06-07 14:39:49.307558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.307564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.317766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.722 [2024-06-07 14:39:49.317785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.317791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.328625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.722 [2024-06-07 14:39:49.328650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.328657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.339629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.722 [2024-06-07 14:39:49.339648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.339654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.349633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.722 [2024-06-07 14:39:49.349652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.349658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.722 [2024-06-07 14:39:49.360553] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.722 [2024-06-07 14:39:49.360572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.722 [2024-06-07 14:39:49.360578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.983 [2024-06-07 14:39:49.370449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.983 [2024-06-07 14:39:49.370469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.983 [2024-06-07 14:39:49.370475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.983 [2024-06-07 14:39:49.380491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.983 [2024-06-07 14:39:49.380509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.983 [2024-06-07 14:39:49.380516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:25.983 [2024-06-07 14:39:49.388251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.983 [2024-06-07 14:39:49.388269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.983 [2024-06-07 14:39:49.388275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:25.983 [2024-06-07 14:39:49.398681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.983 [2024-06-07 14:39:49.398700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.983 [2024-06-07 14:39:49.398706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:25.983 [2024-06-07 14:39:49.408525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.983 [2024-06-07 14:39:49.408544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.983 [2024-06-07 14:39:49.408550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:25.983 [2024-06-07 14:39:49.419124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0) 00:37:25.983 [2024-06-07 14:39:49.419143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.983 [2024-06-07 14:39:49.419149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0
00:37:25.983 [2024-06-07 14:39:49.427866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24eb1f0)
00:37:25.983 [2024-06-07 14:39:49.427884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:25.983 [2024-06-07 14:39:49.427891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:37:25.983
00:37:25.983                                                 Latency(us)
00:37:25.983 Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min        max
00:37:25.983 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:37:25.983     nvme0n1        :       2.00    3010.86     376.36       0.00      0.00     5310.64     757.76   13598.72
00:37:25.983 ===================================================================================================================
00:37:25.983 Total              :            3010.86     376.36       0.00      0.00     5310.64     757.76   13598.72
00:37:25.983 0
00:37:25.983 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:25.983 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:25.983 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:25.983 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:25.983 | .driver_specific
00:37:25.983 | .nvme_error
00:37:25.983 | .status_code
00:37:25.983 | .command_transient_transport_error'
00:37:25.983 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 ))
00:37:25.983 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 807059
00:37:25.983 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 807059 ']'
00:37:25.983 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 807059
00:37:25.983 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:37:25.983 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:37:25.983 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 807059
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 807059'
00:37:26.244 killing process with pid 807059
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 807059
00:37:26.244 Received shutdown signal, test time was about 2.000000 seconds
00:37:26.244
00:37:26.244                                                 Latency(us)
00:37:26.244 Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min        max
00:37:26.244 ===================================================================================================================
00:37:26.244 Total              :               0.00       0.00       0.00      0.00        0.00       0.00       0.00
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 807059
00:37:26.244
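Editor's note: the trace above is the pass/fail check for this digest-error case. get_transient_errcount reads the bdev's I/O statistics over bperf's RPC socket and extracts the NVMe transient-transport-error counter with jq; the run passes because 194 such errors were recorded. A minimal sketch of that extraction, assuming the rpc.py path and /var/tmp/bperf.sock socket shown in the trace (the helper name count_transient_errors is hypothetical):

count_transient_errors() {
    # Query per-bdev I/O statistics through the bdevperf RPC socket and pull out the
    # counter that the injected data digest errors are reported under.
    local bdev=$1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
}

# In the run above this evaluates to 194 for nvme0n1, so a check like the traced (( count > 0 )) passes.
errcount=$(count_transient_errors nvme0n1)
(( errcount > 0 )) && echo "injected digest errors surfaced as transient transport errors"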
14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=807756
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 807756 /var/tmp/bperf.sock
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 807756 ']'
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:26.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:37:26.244 14:39:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:26.244 [2024-06-07 14:39:49.819304] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization...
00:37:26.244 [2024-06-07 14:39:49.819362] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid807756 ]
00:37:26.244 EAL: No free 2048 kB hugepages reported on node 1
00:37:26.505 [2024-06-07 14:39:49.898139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:26.505 [2024-06-07 14:39:49.926305] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:37:27.106 14:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:37:27.106 14:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:37:27.106 14:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:27.106 14:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:27.106 14:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:27.106 14:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:37:27.106 14:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:27.106 14:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:37:27.106 14:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:27.106 14:39:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:27.680 nvme0n1
00:37:27.680 14:39:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:37:27.680 14:39:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:37:27.680 14:39:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:27.680 14:39:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:37:27.680 14:39:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:37:27.680 14:39:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:27.680 Running I/O for 2 seconds...
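Editor's note: the traced setup for this randwrite case reduces to a fixed RPC sequence: start bdevperf idle against a dedicated RPC socket, enable per-status-code NVMe error counting with unlimited bdev retries, attach the controller with TCP data digest (--ddgst) while crc32c injection is disabled, then corrupt 256 crc32c operations and kick off the workload. A condensed sketch under those assumptions, using only commands that appear in the trace; the $rpc and $SPDK_DIR shorthands are introduced here for readability, and backgrounding bdevperf with & stands in for the script's waitforlisten handling:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"

# Start bdevperf idle (-z): 2 s randwrite, 4 KiB I/O, queue depth 128, driven over /var/tmp/bperf.sock.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

# Keep per-status-code NVMe error statistics and retry failed I/O indefinitely at the bdev layer.
$rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# In the trace the injection calls go through rpc_cmd (which I take to be the target
# application's default RPC socket rather than bperf.sock), hence no -s option here.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach the target with TCP data digest enabled while injection is off...
$rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then corrupt 256 crc32c computations so the following writes hit data digest errors.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests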
00:37:27.680 [2024-06-07 14:39:51.166254] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190eb760 00:37:27.680 [2024-06-07 14:39:51.167871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.680 [2024-06-07 14:39:51.167898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:27.680 [2024-06-07 14:39:51.176050] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f2d80 00:37:27.680 [2024-06-07 14:39:51.177136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.680 [2024-06-07 14:39:51.177153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:27.680 [2024-06-07 14:39:51.188523] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.680 [2024-06-07 14:39:51.189607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.680 [2024-06-07 14:39:51.189624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.680 [2024-06-07 14:39:51.200284] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.680 [2024-06-07 14:39:51.201353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.680 [2024-06-07 14:39:51.201368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.680 [2024-06-07 14:39:51.212057] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.680 [2024-06-07 14:39:51.213152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.680 [2024-06-07 14:39:51.213168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.680 [2024-06-07 14:39:51.223829] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.680 [2024-06-07 14:39:51.224906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.680 [2024-06-07 14:39:51.224923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.680 [2024-06-07 14:39:51.235595] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.680 [2024-06-07 14:39:51.236678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.680 [2024-06-07 14:39:51.236694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 
sqhd:0058 p:0 m:0 dnr:0 00:37:27.680 [2024-06-07 14:39:51.247389] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.680 [2024-06-07 14:39:51.248472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.680 [2024-06-07 14:39:51.248488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.680 [2024-06-07 14:39:51.259160] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.680 [2024-06-07 14:39:51.260238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.680 [2024-06-07 14:39:51.260254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.680 [2024-06-07 14:39:51.270921] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.680 [2024-06-07 14:39:51.271998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.680 [2024-06-07 14:39:51.272014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.680 [2024-06-07 14:39:51.282679] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.680 [2024-06-07 14:39:51.283746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.680 [2024-06-07 14:39:51.283762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.680 [2024-06-07 14:39:51.294414] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.680 [2024-06-07 14:39:51.295445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.680 [2024-06-07 14:39:51.295461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.681 [2024-06-07 14:39:51.306134] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.681 [2024-06-07 14:39:51.307206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.681 [2024-06-07 14:39:51.307222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.681 [2024-06-07 14:39:51.317857] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.681 [2024-06-07 14:39:51.318937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.681 [2024-06-07 14:39:51.318953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.329615] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.943 [2024-06-07 14:39:51.330682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.330698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.341336] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.943 [2024-06-07 14:39:51.342413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.342429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.353069] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.943 [2024-06-07 14:39:51.354144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.354163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.364809] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.943 [2024-06-07 14:39:51.365885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.365901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.376548] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.943 [2024-06-07 14:39:51.377588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.377604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.388262] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.943 [2024-06-07 14:39:51.389334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.389350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.399979] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.943 [2024-06-07 14:39:51.401058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.401074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.411700] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.943 [2024-06-07 14:39:51.412772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.412788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.423545] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.943 [2024-06-07 14:39:51.424613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.424628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.435287] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.943 [2024-06-07 14:39:51.436354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.436370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.447036] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.943 [2024-06-07 14:39:51.448068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.448084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.458788] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.943 [2024-06-07 14:39:51.459842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.459858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.470510] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.943 [2024-06-07 14:39:51.471595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.471610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.482242] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6300 00:37:27.943 [2024-06-07 14:39:51.483317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.483332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.493212] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f35f0 00:37:27.943 [2024-06-07 14:39:51.494263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.494279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.505639] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6b70 00:37:27.943 [2024-06-07 14:39:51.506696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.506712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.517386] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6b70 00:37:27.943 [2024-06-07 14:39:51.518457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.518472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.529102] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e6b70 00:37:27.943 [2024-06-07 14:39:51.530157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.530173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.540859] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190ed920 00:37:27.943 [2024-06-07 14:39:51.541917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.541933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.552635] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190ec840 00:37:27.943 [2024-06-07 14:39:51.553686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.553702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.564431] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190fc560 00:37:27.943 [2024-06-07 14:39:51.565457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 
14:39:51.565473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.575408] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f6458 00:37:27.943 [2024-06-07 14:39:51.576431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.943 [2024-06-07 14:39:51.576446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:37:27.943 [2024-06-07 14:39:51.588022] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f0ff8 00:37:27.943 [2024-06-07 14:39:51.589082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:27.944 [2024-06-07 14:39:51.589098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.599021] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e73e0 00:37:28.205 [2024-06-07 14:39:51.600055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.600071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.611457] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.612483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.612499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.623171] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.624206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.624221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.634913] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.635947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.635963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.646613] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.647645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:28.205 [2024-06-07 14:39:51.647661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.658341] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.659368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.659386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.670053] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.671088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.671103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.681781] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.682807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.682823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.693492] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.694524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.694540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.705209] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.706470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.706486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.717131] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.718162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.718178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.728843] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.729853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23483 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.729868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.740576] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.741611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.741626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.752324] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.753358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.753375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.764025] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.765057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.765073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.775742] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.776772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.776787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.787436] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.788440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.788456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.799157] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.800187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.800205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.810867] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.811907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 
nsid:1 lba:21372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.205 [2024-06-07 14:39:51.811923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.205 [2024-06-07 14:39:51.822587] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.205 [2024-06-07 14:39:51.823617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.206 [2024-06-07 14:39:51.823632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.206 [2024-06-07 14:39:51.834292] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.206 [2024-06-07 14:39:51.835321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.206 [2024-06-07 14:39:51.835338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.206 [2024-06-07 14:39:51.846008] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.206 [2024-06-07 14:39:51.847029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.206 [2024-06-07 14:39:51.847045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:51.857709] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:51.858739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:51.858756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:51.869461] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:51.870495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:51.870511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:51.881176] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:51.882213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:51.882228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:51.892899] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:51.893933] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:51.893948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:51.904640] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:51.905667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:51.905682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:51.916352] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:51.917385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:51.917401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:51.928063] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:51.929053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:51.929068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:51.939789] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:51.940820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:51.940836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:51.951492] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:51.952527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:51.952543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:51.963214] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:51.964245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:51.964264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:51.974907] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:51.975942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:51.975957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:51.986647] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:51.987678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:51.987694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:51.998357] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:51.999371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:51.999387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:52.010054] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:52.011086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:52.011101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:52.021764] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:52.022794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:52.022810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:52.033506] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:52.034544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:52.034560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:52.045237] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:52.046266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:52.046282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:52.056979] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:52.058014] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:52.058030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:52.068723] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:52.069754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:52.069770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:52.080477] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:52.081516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:52.081532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:52.092179] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:52.093215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:52.093231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.467 [2024-06-07 14:39:52.103889] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.467 [2024-06-07 14:39:52.104921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.467 [2024-06-07 14:39:52.104937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.115610] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.729 [2024-06-07 14:39:52.116648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.116665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.127340] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.729 [2024-06-07 14:39:52.128368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.128384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.139059] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.729 [2024-06-07 
14:39:52.140098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.140114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.149989] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e7c50 00:37:28.729 [2024-06-07 14:39:52.151006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.151021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.162436] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f2948 00:37:28.729 [2024-06-07 14:39:52.163448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.163464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.174141] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f2948 00:37:28.729 [2024-06-07 14:39:52.175172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.175188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.185854] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f2948 00:37:28.729 [2024-06-07 14:39:52.186836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.186851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.197578] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f2948 00:37:28.729 [2024-06-07 14:39:52.198601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.198617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.209281] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f2948 00:37:28.729 [2024-06-07 14:39:52.210251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.210266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.220192] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f31b8 
00:37:28.729 [2024-06-07 14:39:52.221131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.221146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.232745] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f20d8 00:37:28.729 [2024-06-07 14:39:52.233730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.233746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.243697] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e8d30 00:37:28.729 [2024-06-07 14:39:52.244670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.244685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.256212] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e7c50 00:37:28.729 [2024-06-07 14:39:52.257148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.257164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.267913] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f1868 00:37:28.729 [2024-06-07 14:39:52.268879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.268897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.279607] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f8e88 00:37:28.729 [2024-06-07 14:39:52.280585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.280601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.291349] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f8e88 00:37:28.729 [2024-06-07 14:39:52.292314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.292330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.303056] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with 
pdu=0x2000190f8e88 00:37:28.729 [2024-06-07 14:39:52.304029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.304045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.314776] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f8e88 00:37:28.729 [2024-06-07 14:39:52.315747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.315762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.326484] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f8e88 00:37:28.729 [2024-06-07 14:39:52.327441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.327457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.338225] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f8e88 00:37:28.729 [2024-06-07 14:39:52.339189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.339207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.349925] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190f8e88 00:37:28.729 [2024-06-07 14:39:52.350855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.350871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.361609] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190de470 00:37:28.729 [2024-06-07 14:39:52.362573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.362589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:28.729 [2024-06-07 14:39:52.373359] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190de470 00:37:28.729 [2024-06-07 14:39:52.374320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.729 [2024-06-07 14:39:52.374336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:28.991 [2024-06-07 14:39:52.385029] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.991 [2024-06-07 14:39:52.385984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.991 [2024-06-07 14:39:52.386000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.991 [2024-06-07 14:39:52.396755] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.991 [2024-06-07 14:39:52.397706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.991 [2024-06-07 14:39:52.397722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.991 [2024-06-07 14:39:52.408466] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.991 [2024-06-07 14:39:52.409416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.991 [2024-06-07 14:39:52.409431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.991 [2024-06-07 14:39:52.420167] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.991 [2024-06-07 14:39:52.421119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.991 [2024-06-07 14:39:52.421135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.991 [2024-06-07 14:39:52.431981] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.991 [2024-06-07 14:39:52.432931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.991 [2024-06-07 14:39:52.432947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.991 [2024-06-07 14:39:52.443699] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.991 [2024-06-07 14:39:52.444645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.991 [2024-06-07 14:39:52.444661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.991 [2024-06-07 14:39:52.455435] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.991 [2024-06-07 14:39:52.456369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.991 [2024-06-07 14:39:52.456384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.991 [2024-06-07 14:39:52.467143] tcp.c:2062:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.991 [2024-06-07 14:39:52.468094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.991 [2024-06-07 14:39:52.468109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.991 [2024-06-07 14:39:52.478845] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.992 [2024-06-07 14:39:52.479793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.992 [2024-06-07 14:39:52.479809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.992 [2024-06-07 14:39:52.490569] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.992 [2024-06-07 14:39:52.491520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.992 [2024-06-07 14:39:52.491535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.992 [2024-06-07 14:39:52.502302] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.992 [2024-06-07 14:39:52.503249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.992 [2024-06-07 14:39:52.503265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.992 [2024-06-07 14:39:52.514006] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.992 [2024-06-07 14:39:52.514961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.992 [2024-06-07 14:39:52.514976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.992 [2024-06-07 14:39:52.525739] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.992 [2024-06-07 14:39:52.526695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.992 [2024-06-07 14:39:52.526712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.992 [2024-06-07 14:39:52.537493] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.992 [2024-06-07 14:39:52.538445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.992 [2024-06-07 14:39:52.538460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.992 [2024-06-07 14:39:52.549218] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.992 [2024-06-07 14:39:52.550167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.992 [2024-06-07 14:39:52.550182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.992 [2024-06-07 14:39:52.560953] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.992 [2024-06-07 14:39:52.561908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.992 [2024-06-07 14:39:52.561924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.992 [2024-06-07 14:39:52.572699] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.992 [2024-06-07 14:39:52.573649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.992 [2024-06-07 14:39:52.573667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.992 [2024-06-07 14:39:52.584421] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.992 [2024-06-07 14:39:52.585372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.992 [2024-06-07 14:39:52.585389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.992 [2024-06-07 14:39:52.596138] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.992 [2024-06-07 14:39:52.597089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.992 [2024-06-07 14:39:52.597105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.992 [2024-06-07 14:39:52.607936] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.992 [2024-06-07 14:39:52.608892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.992 [2024-06-07 14:39:52.608908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.992 [2024-06-07 14:39:52.619666] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190e88f8 00:37:28.992 [2024-06-07 14:39:52.620625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.992 [2024-06-07 14:39:52.620641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:28.992 
[2024-06-07 14:39:52.631494] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:28.992 [2024-06-07 14:39:52.632407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.992 [2024-06-07 14:39:52.632423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.643257] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.644153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.644169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.654981] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.655930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.655946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.666698] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.667643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.667659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.678437] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.679369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.679385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.690161] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.691111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.691127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.701933] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.702880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.702896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b 
p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.713857] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.714794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.714810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.725598] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.726541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.726557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.737330] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.738261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.738277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.749072] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.750012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.750028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.760828] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.761773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.761789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.772567] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.773506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.773522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.784291] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.785235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.785251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.796025] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.796969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.796985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.807738] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.808677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.808693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.819478] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.820432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.820448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.831214] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.832147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.832163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.842948] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.843849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.843866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.854675] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.855622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.855638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.866401] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.867339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.867355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.878133] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.879078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.879096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.254 [2024-06-07 14:39:52.889872] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.254 [2024-06-07 14:39:52.890770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.254 [2024-06-07 14:39:52.890786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:52.901610] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:52.902552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:52.902568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:52.913374] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:52.914311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:52.914327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:52.925127] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:52.926085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:52.926102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:52.936873] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:52.937817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:52.937833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:52.948603] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:52.949507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:52.949522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:52.960353] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:52.961285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:52.961301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:52.972067] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:52.973008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:52.973024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:52.983787] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:52.984729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:52.984745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:52.995516] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:52.996457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:52.996473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:53.007267] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:53.008204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:53.008220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:53.018985] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:53.019927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:53.019943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:53.030734] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:53.031679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:53.031695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:53.042452] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:53.043364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:53.043379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:53.054183] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:53.055130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:53.055146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:53.065913] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:53.066841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:53.066857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:53.077670] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.516 [2024-06-07 14:39:53.078616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.516 [2024-06-07 14:39:53.078632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.516 [2024-06-07 14:39:53.089412] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.517 [2024-06-07 14:39:53.090347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.517 [2024-06-07 14:39:53.090363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.517 [2024-06-07 14:39:53.101150] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.517 [2024-06-07 14:39:53.102094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.517 [2024-06-07 14:39:53.102110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.517 [2024-06-07 14:39:53.112873] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.517 [2024-06-07 14:39:53.113837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.517 [2024-06-07 
14:39:53.113853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.517 [2024-06-07 14:39:53.124633] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.517 [2024-06-07 14:39:53.125579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.517 [2024-06-07 14:39:53.125595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.517 [2024-06-07 14:39:53.136379] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.517 [2024-06-07 14:39:53.137294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.517 [2024-06-07 14:39:53.137309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.517 [2024-06-07 14:39:53.148124] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c8da0) with pdu=0x2000190dece0 00:37:29.517 [2024-06-07 14:39:53.149070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.517 [2024-06-07 14:39:53.149085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.517 00:37:29.517 Latency(us) 00:37:29.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:29.517 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:29.517 nvme0n1 : 2.00 21724.34 84.86 0.00 0.00 5884.38 3604.48 13489.49 00:37:29.517 =================================================================================================================== 00:37:29.517 Total : 21724.34 84.86 0.00 0.00 5884.38 3604.48 13489.49 00:37:29.517 0 00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:29.778 | .driver_specific 00:37:29.778 | .nvme_error 00:37:29.778 | .status_code 00:37:29.778 | .command_transient_transport_error' 00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 )) 00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 807756 00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 807756 ']' 00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 807756 00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 
00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 807756 00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 807756' 00:37:29.778 killing process with pid 807756 00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 807756 00:37:29.778 Received shutdown signal, test time was about 2.000000 seconds 00:37:29.778 00:37:29.778 Latency(us) 00:37:29.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:29.778 =================================================================================================================== 00:37:29.778 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:29.778 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 807756 00:37:30.040 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:37:30.040 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:30.040 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:30.040 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:30.040 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:30.040 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=808443 00:37:30.040 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 808443 /var/tmp/bperf.sock 00:37:30.040 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 808443 ']' 00:37:30.040 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:37:30.040 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:30.040 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:30.040 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:30.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:30.040 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:30.040 14:39:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:30.040 [2024-06-07 14:39:53.550650] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:37:30.040 [2024-06-07 14:39:53.550708] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid808443 ] 00:37:30.040 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:30.040 Zero copy mechanism will not be used. 
00:37:30.040 EAL: No free 2048 kB hugepages reported on node 1 00:37:30.040 [2024-06-07 14:39:53.628588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.040 [2024-06-07 14:39:53.656726] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:30.981 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:30.981 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:37:30.981 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:30.981 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:30.981 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:30.981 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:30.981 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:30.981 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:30.981 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:30.981 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:31.242 nvme0n1 00:37:31.242 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:31.242 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:31.242 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:31.242 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:31.242 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:31.242 14:39:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:31.503 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:31.503 Zero copy mechanism will not be used. 00:37:31.503 Running I/O for 2 seconds... 
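The trace above is the setup for this second error run (randwrite, 128 KiB blocks, queue depth 16): bdevperf is started on /var/tmp/bperf.sock, NVMe error counters and unlimited bdev retries are enabled, the controller is attached with TCP data digest turned on (--ddgst), and 32 crc32c operations are corrupted in the accel layer so that data digest verification fails. Condensed into the underlying RPC calls, this is a sketch only; it assumes rpc.py and bdevperf.py refer to the SPDK scripts at the full paths shown in the trace, and that the accel injection RPC goes to the default SPDK RPC socket used by rpc_cmd:

  # host side: per-status error accounting, infinite bdev retries,
  # and data digest enabled on the TCP connection
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt 32 crc32c operations so the computed data digest is wrong
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # drive the workload; each corrupted digest surfaces below as a
  # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion
  bdevperf.py -s /var/tmp/bperf.sock perform_tests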
00:37:31.503 [2024-06-07 14:39:54.974471] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:54.974825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:54.974851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:54.979428] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:54.979648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:54.979666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:54.984087] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:54.984306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:54.984324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:54.988641] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:54.988850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:54.988873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:54.994851] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:54.995074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:54.995091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.003370] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.003715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.003734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.012037] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.012377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.012396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.020250] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.020463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.020480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.026676] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.026886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.026902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.035770] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.036127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.036145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.044983] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.045069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.045085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.054164] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.054405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.054422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.063714] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.063918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.063934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.072750] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.073106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.073124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.081406] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.081750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.081768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.090695] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.090766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.090781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.097846] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.098065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.098082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.104615] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.104677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.104692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.114281] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.114599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.114617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.123368] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.123718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.123736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.131756] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.132010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.132028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.141475] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.503 [2024-06-07 14:39:55.141693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.503 [2024-06-07 14:39:55.141710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:31.503 [2024-06-07 14:39:55.149221] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.149579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 [2024-06-07 14:39:55.149597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.158766] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.159108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 [2024-06-07 14:39:55.159125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.166355] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.166659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 [2024-06-07 14:39:55.166676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.175833] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.176182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 [2024-06-07 14:39:55.176204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.184583] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.184923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 [2024-06-07 14:39:55.184941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.193957] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.194201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 
[2024-06-07 14:39:55.194217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.203487] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.203835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 [2024-06-07 14:39:55.203852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.214165] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.214516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 [2024-06-07 14:39:55.214537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.223505] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.223836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 [2024-06-07 14:39:55.223853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.232860] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.233313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 [2024-06-07 14:39:55.233332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.242396] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.242475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 [2024-06-07 14:39:55.242490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.252377] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.252689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 [2024-06-07 14:39:55.252706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.261432] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.261768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 [2024-06-07 14:39:55.261785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.270095] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.270394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 [2024-06-07 14:39:55.270411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.278567] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.278770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 [2024-06-07 14:39:55.278787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.286943] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.765 [2024-06-07 14:39:55.287306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.765 [2024-06-07 14:39:55.287323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:31.765 [2024-06-07 14:39:55.295923] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.766 [2024-06-07 14:39:55.296207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.766 [2024-06-07 14:39:55.296224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:31.766 [2024-06-07 14:39:55.304570] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.766 [2024-06-07 14:39:55.304785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.766 [2024-06-07 14:39:55.304802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.766 [2024-06-07 14:39:55.312764] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.766 [2024-06-07 14:39:55.313053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.766 [2024-06-07 14:39:55.313070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:31.766 [2024-06-07 14:39:55.320969] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.766 [2024-06-07 14:39:55.321299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.766 [2024-06-07 14:39:55.321317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:31.766 [2024-06-07 14:39:55.329177] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.766 [2024-06-07 14:39:55.329573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.766 [2024-06-07 14:39:55.329592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:31.766 [2024-06-07 14:39:55.337840] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.766 [2024-06-07 14:39:55.338060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.766 [2024-06-07 14:39:55.338077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.766 [2024-06-07 14:39:55.345693] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.766 [2024-06-07 14:39:55.346077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.766 [2024-06-07 14:39:55.346096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:31.766 [2024-06-07 14:39:55.355156] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.766 [2024-06-07 14:39:55.355448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.766 [2024-06-07 14:39:55.355465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:31.766 [2024-06-07 14:39:55.365114] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.766 [2024-06-07 14:39:55.365444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.766 [2024-06-07 14:39:55.365466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:31.766 [2024-06-07 14:39:55.376093] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.766 [2024-06-07 14:39:55.376456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.766 [2024-06-07 14:39:55.376474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:31.766 [2024-06-07 14:39:55.387558] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.766 [2024-06-07 14:39:55.387898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.766 [2024-06-07 14:39:55.387915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:31.766 [2024-06-07 14:39:55.399452] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.766 [2024-06-07 14:39:55.399781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.766 [2024-06-07 14:39:55.399798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:31.766 [2024-06-07 14:39:55.410541] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:31.766 [2024-06-07 14:39:55.410823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.766 [2024-06-07 14:39:55.410840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.028 [2024-06-07 14:39:55.422854] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.028 [2024-06-07 14:39:55.423199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.028 [2024-06-07 14:39:55.423216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.028 [2024-06-07 14:39:55.434894] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.028 [2024-06-07 14:39:55.435220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.028 [2024-06-07 14:39:55.435237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.028 [2024-06-07 14:39:55.445797] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.028 [2024-06-07 14:39:55.446100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.028 [2024-06-07 14:39:55.446118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.028 [2024-06-07 14:39:55.457793] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.028 [2024-06-07 14:39:55.458117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.028 [2024-06-07 14:39:55.458135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.028 [2024-06-07 14:39:55.467281] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.028 
[2024-06-07 14:39:55.467601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.028 [2024-06-07 14:39:55.467618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.028 [2024-06-07 14:39:55.475387] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.028 [2024-06-07 14:39:55.475750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.028 [2024-06-07 14:39:55.475768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.028 [2024-06-07 14:39:55.482936] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.028 [2024-06-07 14:39:55.483225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.028 [2024-06-07 14:39:55.483242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.028 [2024-06-07 14:39:55.489831] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.028 [2024-06-07 14:39:55.490096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.028 [2024-06-07 14:39:55.490113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.028 [2024-06-07 14:39:55.498526] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.028 [2024-06-07 14:39:55.498817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.028 [2024-06-07 14:39:55.498835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.028 [2024-06-07 14:39:55.505987] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.028 [2024-06-07 14:39:55.506258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.028 [2024-06-07 14:39:55.506276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.028 [2024-06-07 14:39:55.511552] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.511857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.511874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.029 [2024-06-07 14:39:55.519575] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.519879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.519896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.029 [2024-06-07 14:39:55.529750] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.530099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.530116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.029 [2024-06-07 14:39:55.541226] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.541474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.541491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.029 [2024-06-07 14:39:55.552536] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.552744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.552761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.029 [2024-06-07 14:39:55.564026] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.564218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.564235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.029 [2024-06-07 14:39:55.575420] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.575717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.575735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.029 [2024-06-07 14:39:55.587479] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.587810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.587828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.029 [2024-06-07 14:39:55.599758] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.600082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.600100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.029 [2024-06-07 14:39:55.611742] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.612099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.612116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.029 [2024-06-07 14:39:55.623339] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.623663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.623680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.029 [2024-06-07 14:39:55.634413] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.634591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.634611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.029 [2024-06-07 14:39:55.643257] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.643432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.643448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.029 [2024-06-07 14:39:55.649350] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.649525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.649541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.029 [2024-06-07 14:39:55.657314] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.657596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.657613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
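Editor's note on the repeating records above: each "data_crc32_calc_done: Data digest error" entry is reported when the CRC-32C data digest carried in an NVMe/TCP PDU does not match the digest recomputed over the received payload, and the affected WRITE is then completed with a transport error (these entries repeat throughout this phase of the run). The fragment below is only an illustrative, standalone computation of that digest, assuming the standard CRC-32C (Castagnoli) definition; it is not SPDK's tcp.c implementation, which uses its own (optionally accelerated) CRC routines.

/* Illustrative sketch only: the NVMe/TCP data digest is a CRC-32C over the
 * PDU data payload. Standard check vector: "123456789" -> 0xE3069283.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                 /* initial value */

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            /* 0x82F63B78 is the reflected CRC-32C (Castagnoli) polynomial */
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;                   /* final XOR */
}

int main(void)
{
    const uint8_t msg[] = "123456789";

    printf("crc32c = 0x%08X\n", crc32c(msg, sizeof(msg) - 1));
    return 0;
}

A receiver performs the same computation over the data it actually received and compares the result against the digest field in the PDU; any mismatch is reported exactly as in the log lines above.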
00:37:32.029 [2024-06-07 14:39:55.663551] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.663860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.663877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.029 [2024-06-07 14:39:55.669621] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.029 [2024-06-07 14:39:55.669912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.029 [2024-06-07 14:39:55.669929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.291 [2024-06-07 14:39:55.676958] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.291 [2024-06-07 14:39:55.677133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.291 [2024-06-07 14:39:55.677149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.291 [2024-06-07 14:39:55.683365] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.291 [2024-06-07 14:39:55.683664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.291 [2024-06-07 14:39:55.683681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.291 [2024-06-07 14:39:55.692402] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.291 [2024-06-07 14:39:55.692579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.291 [2024-06-07 14:39:55.692596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.291 [2024-06-07 14:39:55.698799] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.291 [2024-06-07 14:39:55.698978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.291 [2024-06-07 14:39:55.698994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.291 [2024-06-07 14:39:55.704524] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.291 [2024-06-07 14:39:55.704697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.291 [2024-06-07 14:39:55.704713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.291 [2024-06-07 14:39:55.710421] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.291 [2024-06-07 14:39:55.710601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.291 [2024-06-07 14:39:55.710617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.291 [2024-06-07 14:39:55.719543] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.291 [2024-06-07 14:39:55.719849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.291 [2024-06-07 14:39:55.719867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.291 [2024-06-07 14:39:55.729173] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.291 [2024-06-07 14:39:55.729544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.291 [2024-06-07 14:39:55.729562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.291 [2024-06-07 14:39:55.740175] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.291 [2024-06-07 14:39:55.740586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.291 [2024-06-07 14:39:55.740603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.752269] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.752624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.752641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.764041] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.764362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.764380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.774254] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.774540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.774557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.781973] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.782208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.782224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.788825] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.789016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.789033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.794722] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.794980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.794997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.800348] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.800722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.800739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.807349] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.807635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.807652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.813661] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.813924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.813941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.820855] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.821154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.821171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.828285] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.828611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.828628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.835760] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.836073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.836093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.842686] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.842862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.842878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.850191] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.850384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.850400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.855521] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.855704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.855721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.863126] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.863403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.863422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.871996] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.872185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 
[2024-06-07 14:39:55.872206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.878643] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.878939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.878957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.884262] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.884440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.884456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.889546] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.889833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.889851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.896209] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.896497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.896515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.902659] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.902834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.902850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.907469] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.907644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.907661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.292 [2024-06-07 14:39:55.912691] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.292 [2024-06-07 14:39:55.912988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.292 [2024-06-07 14:39:55.913005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.293 [2024-06-07 14:39:55.917800] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.293 [2024-06-07 14:39:55.917972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-06-07 14:39:55.917989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.293 [2024-06-07 14:39:55.922532] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.293 [2024-06-07 14:39:55.922709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-06-07 14:39:55.922725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.293 [2024-06-07 14:39:55.927654] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.293 [2024-06-07 14:39:55.927867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-06-07 14:39:55.927883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.293 [2024-06-07 14:39:55.936406] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.293 [2024-06-07 14:39:55.936720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.293 [2024-06-07 14:39:55.936737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.554 [2024-06-07 14:39:55.944523] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.554 [2024-06-07 14:39:55.944863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.554 [2024-06-07 14:39:55.944880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.554 [2024-06-07 14:39:55.955288] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.554 [2024-06-07 14:39:55.955675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.554 [2024-06-07 14:39:55.955693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.554 [2024-06-07 14:39:55.966758] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.554 [2024-06-07 14:39:55.967092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.554 [2024-06-07 14:39:55.967109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.554 [2024-06-07 14:39:55.977814] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.554 [2024-06-07 14:39:55.978128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.554 [2024-06-07 14:39:55.978144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.554 [2024-06-07 14:39:55.985305] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.554 [2024-06-07 14:39:55.985479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.554 [2024-06-07 14:39:55.985496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.554 [2024-06-07 14:39:55.994127] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.554 [2024-06-07 14:39:55.994431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.554 [2024-06-07 14:39:55.994448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.554 [2024-06-07 14:39:56.002870] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.554 [2024-06-07 14:39:56.003064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.554 [2024-06-07 14:39:56.003080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.554 [2024-06-07 14:39:56.012634] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.554 [2024-06-07 14:39:56.012806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.554 [2024-06-07 14:39:56.012823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.554 [2024-06-07 14:39:56.022510] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.554 [2024-06-07 14:39:56.022771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.554 [2024-06-07 14:39:56.022788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.554 [2024-06-07 14:39:56.033526] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.554 [2024-06-07 14:39:56.033854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.033871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.044695] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.044970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.044987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.056035] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.056277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.056294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.067224] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.067618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.067635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.078305] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.078689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.078706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.089635] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.089980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.089997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.101007] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.101356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.101374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.113176] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 
[2024-06-07 14:39:56.113544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.113561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.125094] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.125409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.125426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.136255] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.136495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.136513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.146568] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.146859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.146876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.156634] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.156957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.156974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.164456] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.164708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.164726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.171629] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.171919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.171936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.178513] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.178687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.178704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.185083] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.185262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.185278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.194158] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.194512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.194530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.555 [2024-06-07 14:39:56.200049] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.555 [2024-06-07 14:39:56.200229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.555 [2024-06-07 14:39:56.200248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.817 [2024-06-07 14:39:56.205612] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.205907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.817 [2024-06-07 14:39:56.205924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.817 [2024-06-07 14:39:56.211557] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.211862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.817 [2024-06-07 14:39:56.211879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.817 [2024-06-07 14:39:56.217857] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.218133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.817 [2024-06-07 14:39:56.218151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.817 [2024-06-07 14:39:56.223303] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.223478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.817 [2024-06-07 14:39:56.223494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.817 [2024-06-07 14:39:56.228790] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.229057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.817 [2024-06-07 14:39:56.229074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.817 [2024-06-07 14:39:56.234008] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.234298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.817 [2024-06-07 14:39:56.234315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.817 [2024-06-07 14:39:56.241757] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.241932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.817 [2024-06-07 14:39:56.241949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.817 [2024-06-07 14:39:56.251306] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.251572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.817 [2024-06-07 14:39:56.251588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.817 [2024-06-07 14:39:56.256097] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.256289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.817 [2024-06-07 14:39:56.256306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.817 [2024-06-07 14:39:56.261926] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.262198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.817 [2024-06-07 14:39:56.262216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
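Editor's note on the completion status: each digest failure above is surfaced to the host as a completion with status "(00/22)", that is, status code type 0 (generic command status) and status code 0x22, Transient Transport Error in the NVMe specification, with dnr:0 meaning the Do Not Retry bit is clear, so the command may be resubmitted. The fragment below is a minimal, hypothetical host-side check built on those spec values; the struct and helper names are assumptions for illustration only and are not SPDK's spdk_nvme_cpl API.

/* Hedged sketch (not SPDK's API): interpreting the completion fields printed
 * in the log above, using status values from the NVMe base specification.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SCT_GENERIC                  0x0   /* status code type: generic command status */
#define SC_TRANSIENT_TRANSPORT_ERROR 0x22  /* generic status code 22h */

struct completion_status {                 /* hypothetical, mirrors the logged fields */
    uint8_t sct;                           /* status code type */
    uint8_t sc;                            /* status code */
    bool    dnr;                           /* Do Not Retry bit */
};

/* Retry only transient transport errors whose Do Not Retry bit is clear. */
static bool should_retry(const struct completion_status *st)
{
    return st->sct == SCT_GENERIC &&
           st->sc == SC_TRANSIENT_TRANSPORT_ERROR &&
           !st->dnr;
}

int main(void)
{
    /* Values taken from the completions logged above: (00/22), dnr:0. */
    struct completion_status st = { .sct = SCT_GENERIC, .sc = 0x22, .dnr = false };

    printf("retry: %s\n", should_retry(&st) ? "yes" : "no");
    return 0;
}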
00:37:32.817 [2024-06-07 14:39:56.270790] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.271068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.817 [2024-06-07 14:39:56.271085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.817 [2024-06-07 14:39:56.276563] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.276738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.817 [2024-06-07 14:39:56.276754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.817 [2024-06-07 14:39:56.283490] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.283807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.817 [2024-06-07 14:39:56.283824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.817 [2024-06-07 14:39:56.289836] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.290133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.817 [2024-06-07 14:39:56.290150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.817 [2024-06-07 14:39:56.296148] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.296462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.817 [2024-06-07 14:39:56.296480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.817 [2024-06-07 14:39:56.300480] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.817 [2024-06-07 14:39:56.300764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.300782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.305044] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.305340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.305364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.311613] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.311822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.311839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.316146] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.316419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.316436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.320382] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.320555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.320573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.327110] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.327384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.327402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.335117] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.335299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.335315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.341221] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.341551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.341568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.347399] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.347632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.347650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.355702] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.355980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.355998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.361573] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.361844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.361864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.366266] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.366441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.366457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.371786] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.372050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.372068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.377843] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.378177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.378199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.383580] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.383755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.383771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.391634] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.391823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.391840] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.399145] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.399589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.399607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.405375] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.405550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.405566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.410153] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.410342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.410358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.415708] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.415890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.415907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.421112] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.421384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.421403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.426606] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.426917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.426934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.434233] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.434546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 
[2024-06-07 14:39:56.434563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.442103] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.442406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.442423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.450708] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.451025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.451042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:32.818 [2024-06-07 14:39:56.459811] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:32.818 [2024-06-07 14:39:56.460144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.818 [2024-06-07 14:39:56.460162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.468859] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.469129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.469147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.477749] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.478139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.478156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.485691] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.485955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.485972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.494171] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.494535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.494552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.501702] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.501877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.501894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.509087] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.509265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.509282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.515353] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.515666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.515683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.523380] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.523732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.523750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.530933] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.531220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.531237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.539541] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.539748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.539765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.548517] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.548846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.548867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.557220] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.557536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.557553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.567030] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.567380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.567398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.575864] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.576040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.576057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.584831] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.585230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.585248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.591457] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.591631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.591647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.596470] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.596647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.596664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.602703] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.603006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.603023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.609681] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.610014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.610031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.615459] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.615759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.615776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.621765] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.080 [2024-06-07 14:39:56.622039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.080 [2024-06-07 14:39:56.622056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.080 [2024-06-07 14:39:56.629498] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.081 [2024-06-07 14:39:56.629700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.081 [2024-06-07 14:39:56.629717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.081 [2024-06-07 14:39:56.634416] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.081 [2024-06-07 14:39:56.634591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.081 [2024-06-07 14:39:56.634608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.081 [2024-06-07 14:39:56.641423] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.081 [2024-06-07 14:39:56.641756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.081 [2024-06-07 14:39:56.641775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.081 [2024-06-07 14:39:56.647497] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.081 
[2024-06-07 14:39:56.647864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.081 [2024-06-07 14:39:56.647882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.081 [2024-06-07 14:39:56.655280] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.081 [2024-06-07 14:39:56.655455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.081 [2024-06-07 14:39:56.655472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.081 [2024-06-07 14:39:56.662402] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.081 [2024-06-07 14:39:56.662717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.081 [2024-06-07 14:39:56.662735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.081 [2024-06-07 14:39:56.668328] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.081 [2024-06-07 14:39:56.668710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.081 [2024-06-07 14:39:56.668731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.081 [2024-06-07 14:39:56.675221] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.081 [2024-06-07 14:39:56.675545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.081 [2024-06-07 14:39:56.675562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.081 [2024-06-07 14:39:56.682932] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.081 [2024-06-07 14:39:56.683124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.081 [2024-06-07 14:39:56.683141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.081 [2024-06-07 14:39:56.688790] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.081 [2024-06-07 14:39:56.689054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.081 [2024-06-07 14:39:56.689072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.081 [2024-06-07 14:39:56.693650] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.081 [2024-06-07 14:39:56.693824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.081 [2024-06-07 14:39:56.693841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.081 [2024-06-07 14:39:56.700643] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.081 [2024-06-07 14:39:56.700937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.081 [2024-06-07 14:39:56.700955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.081 [2024-06-07 14:39:56.706911] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.081 [2024-06-07 14:39:56.707206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.081 [2024-06-07 14:39:56.707224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.081 [2024-06-07 14:39:56.715050] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.081 [2024-06-07 14:39:56.715229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.081 [2024-06-07 14:39:56.715246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.081 [2024-06-07 14:39:56.721346] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.081 [2024-06-07 14:39:56.721632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.081 [2024-06-07 14:39:56.721650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.343 [2024-06-07 14:39:56.727960] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.343 [2024-06-07 14:39:56.728238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.343 [2024-06-07 14:39:56.728257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.343 [2024-06-07 14:39:56.734813] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.343 [2024-06-07 14:39:56.734988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.343 [2024-06-07 14:39:56.735004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.343 [2024-06-07 14:39:56.739673] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.343 [2024-06-07 14:39:56.739849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.343 [2024-06-07 14:39:56.739865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.343 [2024-06-07 14:39:56.746227] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.343 [2024-06-07 14:39:56.746532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.343 [2024-06-07 14:39:56.746549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.343 [2024-06-07 14:39:56.751265] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.343 [2024-06-07 14:39:56.751440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.343 [2024-06-07 14:39:56.751456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.343 [2024-06-07 14:39:56.757602] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.343 [2024-06-07 14:39:56.757909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.343 [2024-06-07 14:39:56.757927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.343 [2024-06-07 14:39:56.763410] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.343 [2024-06-07 14:39:56.763714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.343 [2024-06-07 14:39:56.763732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.343 [2024-06-07 14:39:56.768878] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.343 [2024-06-07 14:39:56.769144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.343 [2024-06-07 14:39:56.769162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.343 [2024-06-07 14:39:56.774586] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.343 [2024-06-07 14:39:56.774760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.343 [2024-06-07 14:39:56.774776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:37:33.343 [2024-06-07 14:39:56.781991] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.343 [2024-06-07 14:39:56.782363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.343 [2024-06-07 14:39:56.782381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.343 [2024-06-07 14:39:56.789882] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.343 [2024-06-07 14:39:56.790163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.343 [2024-06-07 14:39:56.790181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.795541] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.795716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.795733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.801798] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.801974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.801991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.806549] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.806724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.806740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.811612] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.811910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.811928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.817873] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.818047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.818064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.824808] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.825131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.825148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.829599] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.829774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.829793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.836542] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.836898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.836916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.845054] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.845372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.845389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.850661] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.850834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.850851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.856376] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.856552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.856569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.861140] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.861454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.861472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.868127] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.868394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.868412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.873657] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.873831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.873848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.879546] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.879722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.879738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.886825] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.887004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.887021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.893688] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.893994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.894012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.899814] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.899988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.900005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.904557] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.904854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.904871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.910672] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.910939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.910957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.916779] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.917057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.917075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.925517] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.925948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.925966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.934334] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.934633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.934651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.941352] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.941628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.941645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.948205] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.948381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 [2024-06-07 14:39:56.948398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.955396] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.955654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.344 
[2024-06-07 14:39:56.955671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:33.344 [2024-06-07 14:39:56.962120] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c9210) with pdu=0x2000190fef90 00:37:33.344 [2024-06-07 14:39:56.962427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.345 [2024-06-07 14:39:56.962444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:33.345 00:37:33.345 Latency(us) 00:37:33.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.345 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:33.345 nvme0n1 : 2.00 3988.17 498.52 0.00 0.00 4005.38 1870.51 12397.23 00:37:33.345 =================================================================================================================== 00:37:33.345 Total : 3988.17 498.52 0.00 0.00 4005.38 1870.51 12397.23 00:37:33.345 0 00:37:33.606 14:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:33.606 14:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:33.606 14:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:33.606 | .driver_specific 00:37:33.606 | .nvme_error 00:37:33.606 | .status_code 00:37:33.606 | .command_transient_transport_error' 00:37:33.606 14:39:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:33.606 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 257 > 0 )) 00:37:33.606 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 808443 00:37:33.606 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 808443 ']' 00:37:33.606 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 808443 00:37:33.606 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:37:33.606 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:33.606 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 808443 00:37:33.606 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:33.606 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:33.606 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 808443' 00:37:33.606 killing process with pid 808443 00:37:33.606 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 808443 00:37:33.606 Received shutdown signal, test time was about 2.000000 seconds 00:37:33.606 00:37:33.606 Latency(us) 00:37:33.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.606 =================================================================================================================== 00:37:33.606 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:33.606 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 808443 00:37:33.867 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 806046 00:37:33.867 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 806046 ']' 00:37:33.867 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 806046 00:37:33.867 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:37:33.867 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:33.867 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 806046 00:37:33.867 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:37:33.867 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:37:33.867 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 806046' 00:37:33.867 killing process with pid 806046 00:37:33.867 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 806046 00:37:33.867 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 806046 00:37:33.867 00:37:33.867 real 0m16.155s 00:37:33.867 user 0m31.844s 00:37:33.867 sys 0m3.294s 00:37:33.868 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:33.868 14:39:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:33.868 ************************************ 00:37:33.868 END TEST nvmf_digest_error 00:37:33.868 ************************************ 00:37:34.129 14:39:57 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:34.129 14:39:57 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:34.129 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:34.129 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:37:34.129 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:34.129 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:37:34.129 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:34.129 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:34.129 rmmod nvme_tcp 00:37:34.129 rmmod nvme_fabrics 00:37:34.129 rmmod nvme_keyring 00:37:34.129 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:34.129 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:37:34.129 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:37:34.129 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 806046 ']' 00:37:34.129 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 806046 00:37:34.129 14:39:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 806046 ']' 00:37:34.129 14:39:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 806046 00:37:34.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (806046) - No such process 00:37:34.130 14:39:57 nvmf_tcp.nvmf_digest -- 
common/autotest_common.sh@976 -- # echo 'Process with pid 806046 is not found' 00:37:34.130 Process with pid 806046 is not found 00:37:34.130 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:34.130 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:34.130 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:34.130 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:34.130 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:34.130 14:39:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:34.130 14:39:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:34.130 14:39:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:36.048 14:39:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:36.048 00:37:36.048 real 0m43.036s 00:37:36.048 user 1m6.280s 00:37:36.048 sys 0m12.778s 00:37:36.048 14:39:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:36.048 14:39:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:36.048 ************************************ 00:37:36.048 END TEST nvmf_digest 00:37:36.048 ************************************ 00:37:36.309 14:39:59 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:37:36.309 14:39:59 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:37:36.309 14:39:59 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:37:36.309 14:39:59 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:36.309 14:39:59 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:37:36.309 14:39:59 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:36.309 14:39:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:36.309 ************************************ 00:37:36.309 START TEST nvmf_bdevperf 00:37:36.309 ************************************ 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:36.309 * Looking for test storage... 
00:37:36.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:37:36.309 14:39:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:44.449 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:44.449 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:44.449 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:44.450 Found net devices under 0000:31:00.0: cvl_0_0 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:44.450 Found net devices under 0000:31:00.1: cvl_0_1 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:44.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:44.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:37:44.450 00:37:44.450 --- 10.0.0.2 ping statistics --- 00:37:44.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:44.450 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:44.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:44.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:37:44.450 00:37:44.450 --- 10.0.0.1 ping statistics --- 00:37:44.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:44.450 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:37:44.450 14:40:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=813807 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 813807 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 813807 ']' 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:44.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:44.450 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:44.450 [2024-06-07 14:40:08.080167] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:37:44.450 [2024-06-07 14:40:08.080221] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:44.712 EAL: No free 2048 kB hugepages reported on node 1 00:37:44.712 [2024-06-07 14:40:08.170589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:44.712 [2024-06-07 14:40:08.207389] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
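Editor's note: the nvmf_tcp_init sequence traced above boils down to splitting the two ice ports between the root namespace (initiator side) and a dedicated namespace (target side), then verifying reachability before the target is started. A condensed sketch of that sequence, using the interface names, addresses and namespace name from this particular run (they will differ on other hosts, and root privileges are required):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1      # drop any stale addresses first
    ip netns add cvl_0_0_ns_spdk                              # namespace that will own the target-side port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                        # root namespace -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator port

Only after both pings succeed (as they do above) does the script load nvme-tcp and launch nvmf_tgt inside the namespace.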
00:37:44.712 [2024-06-07 14:40:08.207438] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:44.712 [2024-06-07 14:40:08.207446] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:44.712 [2024-06-07 14:40:08.207453] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:44.712 [2024-06-07 14:40:08.207459] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:44.712 [2024-06-07 14:40:08.207583] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:37:44.712 [2024-06-07 14:40:08.207740] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:44.712 [2024-06-07 14:40:08.207741] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:37:45.283 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:45.283 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:37:45.283 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:45.283 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:45.283 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:45.283 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:45.283 14:40:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:45.283 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:45.283 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:45.283 [2024-06-07 14:40:08.901406] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:45.283 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:45.283 14:40:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:45.283 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:45.283 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:45.543 Malloc0 00:37:45.543 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:45.543 14:40:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:45.543 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:45.543 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:45.543 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:45.543 14:40:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:45.543 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:45.543 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:45.544 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:45.544 14:40:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:45.544 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
00:37:45.544 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:45.544 [2024-06-07 14:40:08.966518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:45.544 14:40:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:45.544 14:40:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:45.544 14:40:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:45.544 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:37:45.544 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:37:45.544 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:45.544 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:45.544 { 00:37:45.544 "params": { 00:37:45.544 "name": "Nvme$subsystem", 00:37:45.544 "trtype": "$TEST_TRANSPORT", 00:37:45.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:45.544 "adrfam": "ipv4", 00:37:45.544 "trsvcid": "$NVMF_PORT", 00:37:45.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:45.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:45.544 "hdgst": ${hdgst:-false}, 00:37:45.544 "ddgst": ${ddgst:-false} 00:37:45.544 }, 00:37:45.544 "method": "bdev_nvme_attach_controller" 00:37:45.544 } 00:37:45.544 EOF 00:37:45.544 )") 00:37:45.544 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:37:45.544 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:37:45.544 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:37:45.544 14:40:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:45.544 "params": { 00:37:45.544 "name": "Nvme1", 00:37:45.544 "trtype": "tcp", 00:37:45.544 "traddr": "10.0.0.2", 00:37:45.544 "adrfam": "ipv4", 00:37:45.544 "trsvcid": "4420", 00:37:45.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:45.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:45.544 "hdgst": false, 00:37:45.544 "ddgst": false 00:37:45.544 }, 00:37:45.544 "method": "bdev_nvme_attach_controller" 00:37:45.544 }' 00:37:45.544 [2024-06-07 14:40:09.020471] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:37:45.544 [2024-06-07 14:40:09.020519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813846 ] 00:37:45.544 EAL: No free 2048 kB hugepages reported on node 1 00:37:45.544 [2024-06-07 14:40:09.084473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:45.544 [2024-06-07 14:40:09.115931] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:37:45.804 Running I/O for 1 seconds... 
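Editor's note: the rpc_cmd calls traced above (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) map one-to-one onto scripts/rpc.py invocations. A sketch of the same target configuration done by hand, assuming the target was started as above and answers on the default /var/tmp/spdk.sock RPC socket (arguments are exactly those shown in the trace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options as in the trace
    $RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose Malloc0 through the subsystem
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420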
00:37:46.748 00:37:46.748 Latency(us) 00:37:46.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:46.748 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:46.748 Verification LBA range: start 0x0 length 0x4000 00:37:46.748 Nvme1n1 : 1.01 9038.20 35.31 0.00 0.00 14098.10 3058.35 13489.49 00:37:46.748 =================================================================================================================== 00:37:46.748 Total : 9038.20 35.31 0.00 0.00 14098.10 3058.35 13489.49 00:37:47.059 14:40:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=814174 00:37:47.059 14:40:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:47.059 14:40:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:47.059 14:40:10 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:47.059 14:40:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:37:47.059 14:40:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:37:47.059 14:40:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:47.059 14:40:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:47.059 { 00:37:47.059 "params": { 00:37:47.059 "name": "Nvme$subsystem", 00:37:47.059 "trtype": "$TEST_TRANSPORT", 00:37:47.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:47.059 "adrfam": "ipv4", 00:37:47.059 "trsvcid": "$NVMF_PORT", 00:37:47.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:47.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:47.059 "hdgst": ${hdgst:-false}, 00:37:47.059 "ddgst": ${ddgst:-false} 00:37:47.059 }, 00:37:47.059 "method": "bdev_nvme_attach_controller" 00:37:47.059 } 00:37:47.059 EOF 00:37:47.059 )") 00:37:47.059 14:40:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:37:47.059 14:40:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:37:47.059 14:40:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:37:47.059 14:40:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:47.059 "params": { 00:37:47.059 "name": "Nvme1", 00:37:47.059 "trtype": "tcp", 00:37:47.059 "traddr": "10.0.0.2", 00:37:47.059 "adrfam": "ipv4", 00:37:47.059 "trsvcid": "4420", 00:37:47.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:47.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:47.059 "hdgst": false, 00:37:47.059 "ddgst": false 00:37:47.059 }, 00:37:47.059 "method": "bdev_nvme_attach_controller" 00:37:47.059 }' 00:37:47.059 [2024-06-07 14:40:10.438692] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:37:47.059 [2024-06-07 14:40:10.438752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid814174 ] 00:37:47.059 EAL: No free 2048 kB hugepages reported on node 1 00:37:47.059 [2024-06-07 14:40:10.502503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:47.059 [2024-06-07 14:40:10.532962] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:37:47.319 Running I/O for 15 seconds... 
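Editor's note: at this point the baseline 1-second verify run has finished (results table above) and a second, 15-second run has been started in the background; as the next lines show, the test then kills the target while that run still has I/O in flight. Condensed, the two invocations look like the sketch below, assuming nvmf/common.sh is sourced so gen_nvmf_target_json is available and using the paths and pid variable from this workspace:

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    # baseline: queue depth 128, 4 KiB verify I/O for 1 second
    $BDEVPERF --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1
    # failover run: same workload for 15 seconds, with the target killed underneath it
    $BDEVPERF --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    sleep 3
    kill -9 "$nvmfpid"    # $nvmfpid holds the nvmf_tgt pid (813807 in this log)
    sleep 3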
00:37:49.863 14:40:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 813807 00:37:49.863 14:40:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:49.863 [2024-06-07 14:40:13.405692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.405734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.405759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.405769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.405779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.405789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.405798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.405806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.405817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.405826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.405839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.405848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.405859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.405866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.405876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.405883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.405893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.405900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.405910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.405920] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.405929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.405939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.405950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.405959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.405969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.405979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.405991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:49.864 [2024-06-07 14:40:13.406181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:49.864 [2024-06-07 14:40:13.406282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:49.864 [2024-06-07 14:40:13.406300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:49.864 [2024-06-07 14:40:13.406317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:49.864 [2024-06-07 14:40:13.406334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:49.864 [2024-06-07 14:40:13.406482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.864 [2024-06-07 14:40:13.406524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.864 [2024-06-07 14:40:13.406531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 
[2024-06-07 14:40:13.406703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.406987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.406997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.407003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.407013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.407020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.407029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:110 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.407036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.407045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.407052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.407062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.407068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.407078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.407084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.407093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.407100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.407109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.407117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.407126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.407134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.407143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.407150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.407160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.407167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.865 [2024-06-07 14:40:13.407177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.865 [2024-06-07 14:40:13.407184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96816 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:49.866 [2024-06-07 14:40:13.407368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407529] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.866 [2024-06-07 14:40:13.407833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.866 [2024-06-07 14:40:13.407840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.867 [2024-06-07 14:40:13.407849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.867 [2024-06-07 14:40:13.407856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.867 [2024-06-07 14:40:13.407865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.867 [2024-06-07 14:40:13.407872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.867 [2024-06-07 14:40:13.407881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.867 [2024-06-07 14:40:13.407888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.867 [2024-06-07 14:40:13.407897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.867 [2024-06-07 14:40:13.407904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.867 [2024-06-07 14:40:13.407913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.867 [2024-06-07 14:40:13.407920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.867 [2024-06-07 14:40:13.407929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.867 [2024-06-07 14:40:13.407936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.867 [2024-06-07 14:40:13.407947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.867 [2024-06-07 14:40:13.407956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.867 [2024-06-07 14:40:13.407965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102af10 is same with the state(5) to be set 00:37:49.867 [2024-06-07 14:40:13.407974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:49.867 [2024-06-07 14:40:13.407980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:49.867 [2024-06-07 14:40:13.407987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97192 len:8 PRP1 0x0 PRP2 0x0 00:37:49.867 [2024-06-07 14:40:13.407995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:49.867 [2024-06-07 14:40:13.408032] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x102af10 was disconnected and freed. reset controller. 
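The "(00/08)" pairs printed by spdk_nvme_print_completion above are the NVMe status code type and status code: SCT 0x0 is the Generic Command Status set, and SC 0x08 in that set is "Command Aborted due to SQ Deletion", so each queued READ was aborted because its submission queue was torn down when the qpair was disconnected. A minimal standalone decoder for just this case, as an illustrative aside (struct cpl_status and decode() are hypothetical helpers, not SPDK types):

#include <stdio.h>

/* Illustrative completion-status fields: SCT = status code type, SC = status code. */
struct cpl_status { unsigned sct; unsigned sc; };

static const char *decode(struct cpl_status s)
{
        if (s.sct == 0x0 && s.sc == 0x08)
                return "ABORTED - SQ DELETION";
        return "other status";
}

int main(void)
{
        struct cpl_status s = { 0x00, 0x08 };   /* the (00/08) seen in the log */
        printf("%s\n", decode(s));              /* prints: ABORTED - SQ DELETION */
        return 0;
}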
00:37:49.867 [2024-06-07 14:40:13.411633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:49.867 [2024-06-07 14:40:13.411679] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:49.867 [2024-06-07 14:40:13.412587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:49.867 [2024-06-07 14:40:13.412624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:49.867 [2024-06-07 14:40:13.412635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:49.867 [2024-06-07 14:40:13.412872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:49.867 [2024-06-07 14:40:13.413092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:49.867 [2024-06-07 14:40:13.413100] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:49.867 [2024-06-07 14:40:13.413109] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:49.867 [2024-06-07 14:40:13.416618] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:49.867 [2024-06-07 14:40:13.425706] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:49.867 [2024-06-07 14:40:13.426301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:49.867 [2024-06-07 14:40:13.426340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:49.867 [2024-06-07 14:40:13.426352] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:49.867 [2024-06-07 14:40:13.426591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:49.867 [2024-06-07 14:40:13.426811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:49.867 [2024-06-07 14:40:13.426819] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:49.867 [2024-06-07 14:40:13.426827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:49.867 [2024-06-07 14:40:13.430335] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
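On Linux, errno 111 is ECONNREFUSED: each reconnect attempt above fails because nothing is accepting connections on 10.0.0.2 port 4420 (the standard NVMe/TCP port) while the target side of the test is down, so the controller is marked failed and the reset is retried. A self-contained sketch that reproduces the same errno when the address is reachable but no listener is present; this is not SPDK's posix_sock_create(), just an ordinary connect() call:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        struct sockaddr_in addr = { 0 };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
                /* When the host is up but nothing listens on the port, this
                 * reports errno 111 (ECONNREFUSED), matching the entries above. */
                printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
}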
00:37:49.867 [2024-06-07 14:40:13.439612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:49.867 [2024-06-07 14:40:13.440281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:49.867 [2024-06-07 14:40:13.440319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:49.867 [2024-06-07 14:40:13.440331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:49.867 [2024-06-07 14:40:13.440575] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:49.867 [2024-06-07 14:40:13.440795] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:49.867 [2024-06-07 14:40:13.440803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:49.867 [2024-06-07 14:40:13.440811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:49.867 [2024-06-07 14:40:13.444332] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:49.867 [2024-06-07 14:40:13.453402] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:49.867 [2024-06-07 14:40:13.454059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:49.867 [2024-06-07 14:40:13.454097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:49.867 [2024-06-07 14:40:13.454108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:49.867 [2024-06-07 14:40:13.454352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:49.867 [2024-06-07 14:40:13.454573] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:49.867 [2024-06-07 14:40:13.454581] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:49.867 [2024-06-07 14:40:13.454589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:49.867 [2024-06-07 14:40:13.458084] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:49.867 [2024-06-07 14:40:13.467152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:49.867 [2024-06-07 14:40:13.467831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:49.867 [2024-06-07 14:40:13.467870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:49.867 [2024-06-07 14:40:13.467880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:49.867 [2024-06-07 14:40:13.468117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:49.867 [2024-06-07 14:40:13.468345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:49.867 [2024-06-07 14:40:13.468354] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:49.867 [2024-06-07 14:40:13.468362] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:49.867 [2024-06-07 14:40:13.471856] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:49.867 [2024-06-07 14:40:13.480933] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:49.867 [2024-06-07 14:40:13.481611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:49.867 [2024-06-07 14:40:13.481649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:49.867 [2024-06-07 14:40:13.481660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:49.867 [2024-06-07 14:40:13.481896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:49.867 [2024-06-07 14:40:13.482116] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:49.867 [2024-06-07 14:40:13.482125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:49.867 [2024-06-07 14:40:13.482136] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:49.867 [2024-06-07 14:40:13.485641] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:49.867 [2024-06-07 14:40:13.494710] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:49.867 [2024-06-07 14:40:13.495309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:49.867 [2024-06-07 14:40:13.495346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:49.867 [2024-06-07 14:40:13.495358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:49.867 [2024-06-07 14:40:13.495597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:49.867 [2024-06-07 14:40:13.495817] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:49.867 [2024-06-07 14:40:13.495826] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:49.867 [2024-06-07 14:40:13.495833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:49.867 [2024-06-07 14:40:13.499375] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.129 [2024-06-07 14:40:13.508459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.129 [2024-06-07 14:40:13.509034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.129 [2024-06-07 14:40:13.509072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.129 [2024-06-07 14:40:13.509083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.129 [2024-06-07 14:40:13.509328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.129 [2024-06-07 14:40:13.509550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.129 [2024-06-07 14:40:13.509558] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.129 [2024-06-07 14:40:13.509565] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.129 [2024-06-07 14:40:13.513060] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.129 [2024-06-07 14:40:13.522339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.129 [2024-06-07 14:40:13.522767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.129 [2024-06-07 14:40:13.522786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.129 [2024-06-07 14:40:13.522794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.129 [2024-06-07 14:40:13.523011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.129 [2024-06-07 14:40:13.523234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.129 [2024-06-07 14:40:13.523243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.129 [2024-06-07 14:40:13.523250] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.129 [2024-06-07 14:40:13.526741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.129 [2024-06-07 14:40:13.536222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.129 [2024-06-07 14:40:13.536801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.129 [2024-06-07 14:40:13.536821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.129 [2024-06-07 14:40:13.536828] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.129 [2024-06-07 14:40:13.537044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.129 [2024-06-07 14:40:13.537265] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.129 [2024-06-07 14:40:13.537274] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.129 [2024-06-07 14:40:13.537280] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.129 [2024-06-07 14:40:13.540769] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.129 [2024-06-07 14:40:13.550057] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.129 [2024-06-07 14:40:13.550580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.129 [2024-06-07 14:40:13.550618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.129 [2024-06-07 14:40:13.550629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.129 [2024-06-07 14:40:13.550865] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.129 [2024-06-07 14:40:13.551085] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.129 [2024-06-07 14:40:13.551095] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.129 [2024-06-07 14:40:13.551104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.129 [2024-06-07 14:40:13.554613] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.129 [2024-06-07 14:40:13.563893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.129 [2024-06-07 14:40:13.564443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.129 [2024-06-07 14:40:13.564481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.129 [2024-06-07 14:40:13.564493] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.129 [2024-06-07 14:40:13.564730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.129 [2024-06-07 14:40:13.564949] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.129 [2024-06-07 14:40:13.564957] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.129 [2024-06-07 14:40:13.564964] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.129 [2024-06-07 14:40:13.568466] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.129 [2024-06-07 14:40:13.577743] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.129 [2024-06-07 14:40:13.578417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.129 [2024-06-07 14:40:13.578455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.129 [2024-06-07 14:40:13.578466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.129 [2024-06-07 14:40:13.578702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.129 [2024-06-07 14:40:13.578926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.129 [2024-06-07 14:40:13.578934] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.129 [2024-06-07 14:40:13.578942] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.129 [2024-06-07 14:40:13.582447] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.129 [2024-06-07 14:40:13.591519] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.129 [2024-06-07 14:40:13.592062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.129 [2024-06-07 14:40:13.592081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.129 [2024-06-07 14:40:13.592088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.129 [2024-06-07 14:40:13.592311] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.129 [2024-06-07 14:40:13.592528] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.129 [2024-06-07 14:40:13.592535] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.129 [2024-06-07 14:40:13.592542] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.129 [2024-06-07 14:40:13.596116] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.129 [2024-06-07 14:40:13.605393] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.129 [2024-06-07 14:40:13.606039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.129 [2024-06-07 14:40:13.606077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.129 [2024-06-07 14:40:13.606088] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.129 [2024-06-07 14:40:13.606332] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.129 [2024-06-07 14:40:13.606553] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.129 [2024-06-07 14:40:13.606561] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.129 [2024-06-07 14:40:13.606568] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.129 [2024-06-07 14:40:13.610063] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.130 [2024-06-07 14:40:13.619136] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.130 [2024-06-07 14:40:13.619798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.130 [2024-06-07 14:40:13.619835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.130 [2024-06-07 14:40:13.619846] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.130 [2024-06-07 14:40:13.620082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.130 [2024-06-07 14:40:13.620310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.130 [2024-06-07 14:40:13.620319] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.130 [2024-06-07 14:40:13.620327] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.130 [2024-06-07 14:40:13.623833] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.130 [2024-06-07 14:40:13.632899] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.130 [2024-06-07 14:40:13.633399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.130 [2024-06-07 14:40:13.633435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.130 [2024-06-07 14:40:13.633447] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.130 [2024-06-07 14:40:13.633682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.130 [2024-06-07 14:40:13.633902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.130 [2024-06-07 14:40:13.633910] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.130 [2024-06-07 14:40:13.633917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.130 [2024-06-07 14:40:13.637422] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.130 [2024-06-07 14:40:13.646713] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.130 [2024-06-07 14:40:13.647295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.130 [2024-06-07 14:40:13.647333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.130 [2024-06-07 14:40:13.647346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.130 [2024-06-07 14:40:13.647585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.130 [2024-06-07 14:40:13.647805] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.130 [2024-06-07 14:40:13.647813] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.130 [2024-06-07 14:40:13.647820] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.130 [2024-06-07 14:40:13.651327] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.130 [2024-06-07 14:40:13.660606] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.130 [2024-06-07 14:40:13.661251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.130 [2024-06-07 14:40:13.661289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.130 [2024-06-07 14:40:13.661301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.130 [2024-06-07 14:40:13.661539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.130 [2024-06-07 14:40:13.661759] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.130 [2024-06-07 14:40:13.661775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.130 [2024-06-07 14:40:13.661782] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.130 [2024-06-07 14:40:13.665287] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.130 [2024-06-07 14:40:13.674359] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.130 [2024-06-07 14:40:13.675031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.130 [2024-06-07 14:40:13.675069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.130 [2024-06-07 14:40:13.675084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.130 [2024-06-07 14:40:13.675327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.130 [2024-06-07 14:40:13.675547] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.130 [2024-06-07 14:40:13.675556] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.130 [2024-06-07 14:40:13.675563] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.130 [2024-06-07 14:40:13.679058] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.130 [2024-06-07 14:40:13.688137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.130 [2024-06-07 14:40:13.688786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.130 [2024-06-07 14:40:13.688824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.130 [2024-06-07 14:40:13.688835] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.130 [2024-06-07 14:40:13.689070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.130 [2024-06-07 14:40:13.689297] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.130 [2024-06-07 14:40:13.689306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.130 [2024-06-07 14:40:13.689314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.130 [2024-06-07 14:40:13.692809] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.130 [2024-06-07 14:40:13.701882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.130 [2024-06-07 14:40:13.702571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.130 [2024-06-07 14:40:13.702609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.130 [2024-06-07 14:40:13.702620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.130 [2024-06-07 14:40:13.702856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.130 [2024-06-07 14:40:13.703076] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.130 [2024-06-07 14:40:13.703084] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.130 [2024-06-07 14:40:13.703092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.130 [2024-06-07 14:40:13.706841] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.130 [2024-06-07 14:40:13.715720] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.130 [2024-06-07 14:40:13.716317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.130 [2024-06-07 14:40:13.716355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.130 [2024-06-07 14:40:13.716368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.130 [2024-06-07 14:40:13.716606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.130 [2024-06-07 14:40:13.716827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.130 [2024-06-07 14:40:13.716839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.130 [2024-06-07 14:40:13.716847] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.130 [2024-06-07 14:40:13.720353] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.130 [2024-06-07 14:40:13.729631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.130 [2024-06-07 14:40:13.730296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.130 [2024-06-07 14:40:13.730334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.130 [2024-06-07 14:40:13.730347] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.130 [2024-06-07 14:40:13.730584] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.130 [2024-06-07 14:40:13.730804] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.130 [2024-06-07 14:40:13.730813] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.130 [2024-06-07 14:40:13.730820] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.130 [2024-06-07 14:40:13.734325] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.130 [2024-06-07 14:40:13.743404] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.130 [2024-06-07 14:40:13.744056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.130 [2024-06-07 14:40:13.744093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.130 [2024-06-07 14:40:13.744104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.130 [2024-06-07 14:40:13.744348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.130 [2024-06-07 14:40:13.744568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.130 [2024-06-07 14:40:13.744576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.130 [2024-06-07 14:40:13.744584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.130 [2024-06-07 14:40:13.748080] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.130 [2024-06-07 14:40:13.757152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.130 [2024-06-07 14:40:13.757835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.131 [2024-06-07 14:40:13.757873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.131 [2024-06-07 14:40:13.757884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.131 [2024-06-07 14:40:13.758120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.131 [2024-06-07 14:40:13.758349] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.131 [2024-06-07 14:40:13.758358] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.131 [2024-06-07 14:40:13.758365] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.131 [2024-06-07 14:40:13.761866] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.131 [2024-06-07 14:40:13.770946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.131 [2024-06-07 14:40:13.771597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.131 [2024-06-07 14:40:13.771635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.131 [2024-06-07 14:40:13.771645] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.131 [2024-06-07 14:40:13.771881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.131 [2024-06-07 14:40:13.772101] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.131 [2024-06-07 14:40:13.772109] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.131 [2024-06-07 14:40:13.772117] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.391 [2024-06-07 14:40:13.775620] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.391 [2024-06-07 14:40:13.784693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.391 [2024-06-07 14:40:13.785294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.391 [2024-06-07 14:40:13.785332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.391 [2024-06-07 14:40:13.785344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.391 [2024-06-07 14:40:13.785583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.391 [2024-06-07 14:40:13.785802] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.391 [2024-06-07 14:40:13.785811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.391 [2024-06-07 14:40:13.785819] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.391 [2024-06-07 14:40:13.789329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.391 [2024-06-07 14:40:13.798609] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.391 [2024-06-07 14:40:13.799281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.391 [2024-06-07 14:40:13.799319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.391 [2024-06-07 14:40:13.799330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.391 [2024-06-07 14:40:13.799565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.391 [2024-06-07 14:40:13.799785] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.392 [2024-06-07 14:40:13.799794] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.392 [2024-06-07 14:40:13.799801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.392 [2024-06-07 14:40:13.803303] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.392 [2024-06-07 14:40:13.812377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.392 [2024-06-07 14:40:13.812925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.392 [2024-06-07 14:40:13.812963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.392 [2024-06-07 14:40:13.812973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.392 [2024-06-07 14:40:13.813223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.392 [2024-06-07 14:40:13.813444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.392 [2024-06-07 14:40:13.813452] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.392 [2024-06-07 14:40:13.813460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.392 [2024-06-07 14:40:13.816955] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.392 [2024-06-07 14:40:13.826233] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.392 [2024-06-07 14:40:13.826777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.392 [2024-06-07 14:40:13.826794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.392 [2024-06-07 14:40:13.826802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.392 [2024-06-07 14:40:13.827018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.392 [2024-06-07 14:40:13.827240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.392 [2024-06-07 14:40:13.827249] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.392 [2024-06-07 14:40:13.827256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.392 [2024-06-07 14:40:13.830747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.392 [2024-06-07 14:40:13.840018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.392 [2024-06-07 14:40:13.840560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.392 [2024-06-07 14:40:13.840576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.392 [2024-06-07 14:40:13.840583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.392 [2024-06-07 14:40:13.840798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.392 [2024-06-07 14:40:13.841014] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.392 [2024-06-07 14:40:13.841022] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.392 [2024-06-07 14:40:13.841028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.392 [2024-06-07 14:40:13.844533] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.392 [2024-06-07 14:40:13.853807] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.392 [2024-06-07 14:40:13.854504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.392 [2024-06-07 14:40:13.854542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.392 [2024-06-07 14:40:13.854554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.392 [2024-06-07 14:40:13.854790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.392 [2024-06-07 14:40:13.855010] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.392 [2024-06-07 14:40:13.855019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.392 [2024-06-07 14:40:13.855031] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.392 [2024-06-07 14:40:13.858536] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.392 [2024-06-07 14:40:13.867606] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.392 [2024-06-07 14:40:13.868179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.392 [2024-06-07 14:40:13.868202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.392 [2024-06-07 14:40:13.868211] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.392 [2024-06-07 14:40:13.868428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.392 [2024-06-07 14:40:13.868643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.392 [2024-06-07 14:40:13.868652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.392 [2024-06-07 14:40:13.868659] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.392 [2024-06-07 14:40:13.872145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.392 [2024-06-07 14:40:13.881424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.392 [2024-06-07 14:40:13.881992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.392 [2024-06-07 14:40:13.882007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.392 [2024-06-07 14:40:13.882015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.392 [2024-06-07 14:40:13.882236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.392 [2024-06-07 14:40:13.882452] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.392 [2024-06-07 14:40:13.882460] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.392 [2024-06-07 14:40:13.882467] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.392 [2024-06-07 14:40:13.885957] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.392 [2024-06-07 14:40:13.895249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.392 [2024-06-07 14:40:13.895827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.392 [2024-06-07 14:40:13.895842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.392 [2024-06-07 14:40:13.895850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.392 [2024-06-07 14:40:13.896065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.392 [2024-06-07 14:40:13.896285] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.392 [2024-06-07 14:40:13.896294] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.392 [2024-06-07 14:40:13.896302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.392 [2024-06-07 14:40:13.899795] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.392 [2024-06-07 14:40:13.909080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.392 [2024-06-07 14:40:13.909755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.392 [2024-06-07 14:40:13.909794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.392 [2024-06-07 14:40:13.909805] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.392 [2024-06-07 14:40:13.910040] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.392 [2024-06-07 14:40:13.910271] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.392 [2024-06-07 14:40:13.910280] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.392 [2024-06-07 14:40:13.910287] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.392 [2024-06-07 14:40:13.913843] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.392 [2024-06-07 14:40:13.922947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.392 [2024-06-07 14:40:13.923614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.392 [2024-06-07 14:40:13.923652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.392 [2024-06-07 14:40:13.923663] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.392 [2024-06-07 14:40:13.923899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.392 [2024-06-07 14:40:13.924119] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.392 [2024-06-07 14:40:13.924127] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.392 [2024-06-07 14:40:13.924135] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.392 [2024-06-07 14:40:13.927642] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.392 [2024-06-07 14:40:13.936718] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.392 [2024-06-07 14:40:13.937396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.392 [2024-06-07 14:40:13.937434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.392 [2024-06-07 14:40:13.937445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.392 [2024-06-07 14:40:13.937680] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.392 [2024-06-07 14:40:13.937900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.392 [2024-06-07 14:40:13.937908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.392 [2024-06-07 14:40:13.937916] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.393 [2024-06-07 14:40:13.941418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.393 [2024-06-07 14:40:13.950492] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.393 [2024-06-07 14:40:13.951165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.393 [2024-06-07 14:40:13.951209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.393 [2024-06-07 14:40:13.951222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.393 [2024-06-07 14:40:13.951459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.393 [2024-06-07 14:40:13.951683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.393 [2024-06-07 14:40:13.951691] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.393 [2024-06-07 14:40:13.951699] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.393 [2024-06-07 14:40:13.955199] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.393 [2024-06-07 14:40:13.964268] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.393 [2024-06-07 14:40:13.964943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.393 [2024-06-07 14:40:13.964980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.393 [2024-06-07 14:40:13.964991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.393 [2024-06-07 14:40:13.965236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.393 [2024-06-07 14:40:13.965457] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.393 [2024-06-07 14:40:13.965465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.393 [2024-06-07 14:40:13.965472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.393 [2024-06-07 14:40:13.968971] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.393 [2024-06-07 14:40:13.978041] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.393 [2024-06-07 14:40:13.978630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.393 [2024-06-07 14:40:13.978649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.393 [2024-06-07 14:40:13.978656] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.393 [2024-06-07 14:40:13.978872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.393 [2024-06-07 14:40:13.979088] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.393 [2024-06-07 14:40:13.979095] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.393 [2024-06-07 14:40:13.979102] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.393 [2024-06-07 14:40:13.982596] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.393 [2024-06-07 14:40:13.991865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.393 [2024-06-07 14:40:13.992413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.393 [2024-06-07 14:40:13.992429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.393 [2024-06-07 14:40:13.992436] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.393 [2024-06-07 14:40:13.992652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.393 [2024-06-07 14:40:13.992867] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.393 [2024-06-07 14:40:13.992875] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.393 [2024-06-07 14:40:13.992883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.393 [2024-06-07 14:40:13.996386] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.393 [2024-06-07 14:40:14.005652] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.393 [2024-06-07 14:40:14.006116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.393 [2024-06-07 14:40:14.006131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.393 [2024-06-07 14:40:14.006139] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.393 [2024-06-07 14:40:14.006360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.393 [2024-06-07 14:40:14.006576] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.393 [2024-06-07 14:40:14.006583] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.393 [2024-06-07 14:40:14.006590] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.393 [2024-06-07 14:40:14.010077] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.393 [2024-06-07 14:40:14.019554] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.393 [2024-06-07 14:40:14.020082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.393 [2024-06-07 14:40:14.020097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.393 [2024-06-07 14:40:14.020104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.393 [2024-06-07 14:40:14.020324] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.393 [2024-06-07 14:40:14.020540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.393 [2024-06-07 14:40:14.020549] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.393 [2024-06-07 14:40:14.020555] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.393 [2024-06-07 14:40:14.024042] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.393 [2024-06-07 14:40:14.033312] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.393 [2024-06-07 14:40:14.033856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.393 [2024-06-07 14:40:14.033871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.393 [2024-06-07 14:40:14.033878] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.393 [2024-06-07 14:40:14.034093] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.393 [2024-06-07 14:40:14.034316] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.393 [2024-06-07 14:40:14.034325] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.393 [2024-06-07 14:40:14.034332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.654 [2024-06-07 14:40:14.037823] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.654 [2024-06-07 14:40:14.047110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.654 [2024-06-07 14:40:14.047643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.654 [2024-06-07 14:40:14.047662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.654 [2024-06-07 14:40:14.047670] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.654 [2024-06-07 14:40:14.047885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.654 [2024-06-07 14:40:14.048100] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.654 [2024-06-07 14:40:14.048107] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.654 [2024-06-07 14:40:14.048114] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.654 [2024-06-07 14:40:14.051606] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.654 [2024-06-07 14:40:14.060870] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.654 [2024-06-07 14:40:14.061541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.654 [2024-06-07 14:40:14.061579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.654 [2024-06-07 14:40:14.061590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.654 [2024-06-07 14:40:14.061826] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.654 [2024-06-07 14:40:14.062046] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.654 [2024-06-07 14:40:14.062054] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.654 [2024-06-07 14:40:14.062062] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.654 [2024-06-07 14:40:14.065564] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.654 [2024-06-07 14:40:14.074639] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.654 [2024-06-07 14:40:14.075308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.654 [2024-06-07 14:40:14.075346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.654 [2024-06-07 14:40:14.075356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.655 [2024-06-07 14:40:14.075592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.655 [2024-06-07 14:40:14.075812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.655 [2024-06-07 14:40:14.075821] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.655 [2024-06-07 14:40:14.075828] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.655 [2024-06-07 14:40:14.079328] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.655 [2024-06-07 14:40:14.088399] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.655 [2024-06-07 14:40:14.089043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.655 [2024-06-07 14:40:14.089081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.655 [2024-06-07 14:40:14.089092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.655 [2024-06-07 14:40:14.089337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.655 [2024-06-07 14:40:14.089562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.655 [2024-06-07 14:40:14.089572] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.655 [2024-06-07 14:40:14.089580] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.655 [2024-06-07 14:40:14.093081] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.655 [2024-06-07 14:40:14.102157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.655 [2024-06-07 14:40:14.102680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.655 [2024-06-07 14:40:14.102717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.655 [2024-06-07 14:40:14.102728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.655 [2024-06-07 14:40:14.102964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.655 [2024-06-07 14:40:14.103183] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.655 [2024-06-07 14:40:14.103192] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.655 [2024-06-07 14:40:14.103209] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.655 [2024-06-07 14:40:14.106708] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.655 [2024-06-07 14:40:14.115980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.655 [2024-06-07 14:40:14.116607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.655 [2024-06-07 14:40:14.116644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.655 [2024-06-07 14:40:14.116655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.655 [2024-06-07 14:40:14.116891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.655 [2024-06-07 14:40:14.117110] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.655 [2024-06-07 14:40:14.117119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.655 [2024-06-07 14:40:14.117126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.655 [2024-06-07 14:40:14.120631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.655 [2024-06-07 14:40:14.129735] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.655 [2024-06-07 14:40:14.130414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.655 [2024-06-07 14:40:14.130451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.655 [2024-06-07 14:40:14.130463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.655 [2024-06-07 14:40:14.130698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.655 [2024-06-07 14:40:14.130918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.655 [2024-06-07 14:40:14.130926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.655 [2024-06-07 14:40:14.130934] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.655 [2024-06-07 14:40:14.134438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.655 [2024-06-07 14:40:14.143519] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.655 [2024-06-07 14:40:14.144202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.655 [2024-06-07 14:40:14.144239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.655 [2024-06-07 14:40:14.144250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.655 [2024-06-07 14:40:14.144485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.655 [2024-06-07 14:40:14.144706] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.655 [2024-06-07 14:40:14.144714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.655 [2024-06-07 14:40:14.144721] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.655 [2024-06-07 14:40:14.148222] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.655 [2024-06-07 14:40:14.157291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.655 [2024-06-07 14:40:14.157939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.655 [2024-06-07 14:40:14.157976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.655 [2024-06-07 14:40:14.157987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.655 [2024-06-07 14:40:14.158233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.655 [2024-06-07 14:40:14.158454] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.655 [2024-06-07 14:40:14.158462] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.655 [2024-06-07 14:40:14.158470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.655 [2024-06-07 14:40:14.161970] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.655 [2024-06-07 14:40:14.171040] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.655 [2024-06-07 14:40:14.171678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.655 [2024-06-07 14:40:14.171715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.655 [2024-06-07 14:40:14.171726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.655 [2024-06-07 14:40:14.171962] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.655 [2024-06-07 14:40:14.172181] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.655 [2024-06-07 14:40:14.172189] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.655 [2024-06-07 14:40:14.172207] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.655 [2024-06-07 14:40:14.175708] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.655 [2024-06-07 14:40:14.184792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.655 [2024-06-07 14:40:14.185423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.655 [2024-06-07 14:40:14.185461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.655 [2024-06-07 14:40:14.185476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.655 [2024-06-07 14:40:14.185712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.655 [2024-06-07 14:40:14.185932] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.655 [2024-06-07 14:40:14.185940] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.655 [2024-06-07 14:40:14.185947] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.655 [2024-06-07 14:40:14.189450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.655 [2024-06-07 14:40:14.198723] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.655 [2024-06-07 14:40:14.199412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.655 [2024-06-07 14:40:14.199450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.655 [2024-06-07 14:40:14.199461] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.655 [2024-06-07 14:40:14.199697] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.655 [2024-06-07 14:40:14.199917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.655 [2024-06-07 14:40:14.199925] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.655 [2024-06-07 14:40:14.199933] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.656 [2024-06-07 14:40:14.203445] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.656 [2024-06-07 14:40:14.212512] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.656 [2024-06-07 14:40:14.213116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.656 [2024-06-07 14:40:14.213154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.656 [2024-06-07 14:40:14.213165] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.656 [2024-06-07 14:40:14.213409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.656 [2024-06-07 14:40:14.213630] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.656 [2024-06-07 14:40:14.213638] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.656 [2024-06-07 14:40:14.213646] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.656 [2024-06-07 14:40:14.217140] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.656 [2024-06-07 14:40:14.226415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.656 [2024-06-07 14:40:14.227063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.656 [2024-06-07 14:40:14.227100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.656 [2024-06-07 14:40:14.227111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.656 [2024-06-07 14:40:14.227355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.656 [2024-06-07 14:40:14.227576] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.656 [2024-06-07 14:40:14.227590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.656 [2024-06-07 14:40:14.227597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.656 [2024-06-07 14:40:14.231096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.656 [2024-06-07 14:40:14.240163] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.656 [2024-06-07 14:40:14.240700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.656 [2024-06-07 14:40:14.240737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.656 [2024-06-07 14:40:14.240748] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.656 [2024-06-07 14:40:14.240984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.656 [2024-06-07 14:40:14.241213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.656 [2024-06-07 14:40:14.241222] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.656 [2024-06-07 14:40:14.241230] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.656 [2024-06-07 14:40:14.244737] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.656 [2024-06-07 14:40:14.254010] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.656 [2024-06-07 14:40:14.254564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.656 [2024-06-07 14:40:14.254602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.656 [2024-06-07 14:40:14.254613] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.656 [2024-06-07 14:40:14.254849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.656 [2024-06-07 14:40:14.255069] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.656 [2024-06-07 14:40:14.255077] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.656 [2024-06-07 14:40:14.255085] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.656 [2024-06-07 14:40:14.258589] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.656 [2024-06-07 14:40:14.267864] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.656 [2024-06-07 14:40:14.268524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.656 [2024-06-07 14:40:14.268562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.656 [2024-06-07 14:40:14.268573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.656 [2024-06-07 14:40:14.268809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.656 [2024-06-07 14:40:14.269029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.656 [2024-06-07 14:40:14.269037] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.656 [2024-06-07 14:40:14.269045] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.656 [2024-06-07 14:40:14.272547] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.656 [2024-06-07 14:40:14.281614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.656 [2024-06-07 14:40:14.282263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.656 [2024-06-07 14:40:14.282301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.656 [2024-06-07 14:40:14.282313] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.656 [2024-06-07 14:40:14.282550] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.656 [2024-06-07 14:40:14.282770] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.656 [2024-06-07 14:40:14.282778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.656 [2024-06-07 14:40:14.282786] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.656 [2024-06-07 14:40:14.286290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.656 [2024-06-07 14:40:14.295356] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.656 [2024-06-07 14:40:14.296007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.656 [2024-06-07 14:40:14.296045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.656 [2024-06-07 14:40:14.296056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.656 [2024-06-07 14:40:14.296302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.656 [2024-06-07 14:40:14.296523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.656 [2024-06-07 14:40:14.296531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.656 [2024-06-07 14:40:14.296539] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.656 [2024-06-07 14:40:14.300039] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.917 [2024-06-07 14:40:14.309121] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.917 [2024-06-07 14:40:14.309771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.917 [2024-06-07 14:40:14.309809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.917 [2024-06-07 14:40:14.309820] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.917 [2024-06-07 14:40:14.310056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.917 [2024-06-07 14:40:14.310286] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.917 [2024-06-07 14:40:14.310295] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.917 [2024-06-07 14:40:14.310302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.918 [2024-06-07 14:40:14.313799] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.918 [2024-06-07 14:40:14.322863] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.918 [2024-06-07 14:40:14.323536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.918 [2024-06-07 14:40:14.323573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.918 [2024-06-07 14:40:14.323584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.918 [2024-06-07 14:40:14.323829] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.918 [2024-06-07 14:40:14.324049] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.918 [2024-06-07 14:40:14.324057] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.918 [2024-06-07 14:40:14.324064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.918 [2024-06-07 14:40:14.327567] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.918 [2024-06-07 14:40:14.336671] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.918 [2024-06-07 14:40:14.337358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.918 [2024-06-07 14:40:14.337396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.918 [2024-06-07 14:40:14.337406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.918 [2024-06-07 14:40:14.337642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.918 [2024-06-07 14:40:14.337863] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.918 [2024-06-07 14:40:14.337871] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.918 [2024-06-07 14:40:14.337879] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.918 [2024-06-07 14:40:14.341382] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.918 [2024-06-07 14:40:14.350461] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.918 [2024-06-07 14:40:14.351113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.918 [2024-06-07 14:40:14.351150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.918 [2024-06-07 14:40:14.351161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.918 [2024-06-07 14:40:14.351406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.918 [2024-06-07 14:40:14.351628] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.918 [2024-06-07 14:40:14.351636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.918 [2024-06-07 14:40:14.351644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.918 [2024-06-07 14:40:14.355138] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.918 [2024-06-07 14:40:14.364208] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.918 [2024-06-07 14:40:14.364751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.918 [2024-06-07 14:40:14.364769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.918 [2024-06-07 14:40:14.364777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.918 [2024-06-07 14:40:14.364993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.918 [2024-06-07 14:40:14.365216] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.918 [2024-06-07 14:40:14.365224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.918 [2024-06-07 14:40:14.365235] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.918 [2024-06-07 14:40:14.368728] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.918 [2024-06-07 14:40:14.377997] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.918 [2024-06-07 14:40:14.378583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.918 [2024-06-07 14:40:14.378599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.918 [2024-06-07 14:40:14.378606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.918 [2024-06-07 14:40:14.378822] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.918 [2024-06-07 14:40:14.379037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.918 [2024-06-07 14:40:14.379045] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.918 [2024-06-07 14:40:14.379051] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.918 [2024-06-07 14:40:14.382543] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.918 [2024-06-07 14:40:14.391823] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.918 [2024-06-07 14:40:14.392384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.918 [2024-06-07 14:40:14.392400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.918 [2024-06-07 14:40:14.392407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.918 [2024-06-07 14:40:14.392623] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.918 [2024-06-07 14:40:14.392840] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.918 [2024-06-07 14:40:14.392848] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.918 [2024-06-07 14:40:14.392855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.918 [2024-06-07 14:40:14.396352] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.918 [2024-06-07 14:40:14.405643] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.918 [2024-06-07 14:40:14.406206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.918 [2024-06-07 14:40:14.406222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.918 [2024-06-07 14:40:14.406229] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.918 [2024-06-07 14:40:14.406445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.918 [2024-06-07 14:40:14.406660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.918 [2024-06-07 14:40:14.406668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.918 [2024-06-07 14:40:14.406674] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.918 [2024-06-07 14:40:14.410171] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.918 [2024-06-07 14:40:14.419459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.918 [2024-06-07 14:40:14.420040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.918 [2024-06-07 14:40:14.420055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.918 [2024-06-07 14:40:14.420062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.918 [2024-06-07 14:40:14.420284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.918 [2024-06-07 14:40:14.420501] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.918 [2024-06-07 14:40:14.420509] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.918 [2024-06-07 14:40:14.420515] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.918 [2024-06-07 14:40:14.424007] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.918 [2024-06-07 14:40:14.433308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.918 [2024-06-07 14:40:14.433913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.918 [2024-06-07 14:40:14.433951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.919 [2024-06-07 14:40:14.433962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.919 [2024-06-07 14:40:14.434207] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.919 [2024-06-07 14:40:14.434428] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.919 [2024-06-07 14:40:14.434436] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.919 [2024-06-07 14:40:14.434444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.919 [2024-06-07 14:40:14.437946] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.919 [2024-06-07 14:40:14.447120] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.919 [2024-06-07 14:40:14.447676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.919 [2024-06-07 14:40:14.447695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.919 [2024-06-07 14:40:14.447704] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.919 [2024-06-07 14:40:14.447920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.919 [2024-06-07 14:40:14.448136] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.919 [2024-06-07 14:40:14.448144] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.919 [2024-06-07 14:40:14.448151] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.919 [2024-06-07 14:40:14.451653] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.919 [2024-06-07 14:40:14.460937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.919 [2024-06-07 14:40:14.461446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.919 [2024-06-07 14:40:14.461463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.919 [2024-06-07 14:40:14.461472] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.919 [2024-06-07 14:40:14.461690] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.919 [2024-06-07 14:40:14.461910] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.919 [2024-06-07 14:40:14.461918] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.919 [2024-06-07 14:40:14.461925] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.919 [2024-06-07 14:40:14.465426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.919 [2024-06-07 14:40:14.474711] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.919 [2024-06-07 14:40:14.475274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.919 [2024-06-07 14:40:14.475290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.919 [2024-06-07 14:40:14.475297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.919 [2024-06-07 14:40:14.475513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.919 [2024-06-07 14:40:14.475728] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.919 [2024-06-07 14:40:14.475736] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.919 [2024-06-07 14:40:14.475743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.919 [2024-06-07 14:40:14.479245] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.919 [2024-06-07 14:40:14.488526] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.919 [2024-06-07 14:40:14.489077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.919 [2024-06-07 14:40:14.489092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.919 [2024-06-07 14:40:14.489099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.919 [2024-06-07 14:40:14.489320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.919 [2024-06-07 14:40:14.489536] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.919 [2024-06-07 14:40:14.489544] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.919 [2024-06-07 14:40:14.489551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.919 [2024-06-07 14:40:14.493043] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.919 [2024-06-07 14:40:14.502341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.919 [2024-06-07 14:40:14.502895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.919 [2024-06-07 14:40:14.502933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.919 [2024-06-07 14:40:14.502944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.919 [2024-06-07 14:40:14.503180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.919 [2024-06-07 14:40:14.503409] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.919 [2024-06-07 14:40:14.503419] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.919 [2024-06-07 14:40:14.503426] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.919 [2024-06-07 14:40:14.506932] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.919 [2024-06-07 14:40:14.516229] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.919 [2024-06-07 14:40:14.516817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.919 [2024-06-07 14:40:14.516835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.919 [2024-06-07 14:40:14.516843] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.919 [2024-06-07 14:40:14.517059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.919 [2024-06-07 14:40:14.517282] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.919 [2024-06-07 14:40:14.517290] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.919 [2024-06-07 14:40:14.517297] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.919 [2024-06-07 14:40:14.520796] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.919 [2024-06-07 14:40:14.530078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.919 [2024-06-07 14:40:14.530740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.919 [2024-06-07 14:40:14.530777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.919 [2024-06-07 14:40:14.530789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.919 [2024-06-07 14:40:14.531025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.919 [2024-06-07 14:40:14.531254] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.919 [2024-06-07 14:40:14.531263] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.919 [2024-06-07 14:40:14.531271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.919 [2024-06-07 14:40:14.534774] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:50.919 [2024-06-07 14:40:14.543904] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.919 [2024-06-07 14:40:14.544457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.919 [2024-06-07 14:40:14.544476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.919 [2024-06-07 14:40:14.544484] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.919 [2024-06-07 14:40:14.544701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.919 [2024-06-07 14:40:14.544917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.919 [2024-06-07 14:40:14.544925] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.919 [2024-06-07 14:40:14.544932] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.919 [2024-06-07 14:40:14.548435] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:50.919 [2024-06-07 14:40:14.557780] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:50.919 [2024-06-07 14:40:14.558332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:50.919 [2024-06-07 14:40:14.558349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:50.919 [2024-06-07 14:40:14.558360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:50.919 [2024-06-07 14:40:14.558577] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:50.919 [2024-06-07 14:40:14.558793] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:50.919 [2024-06-07 14:40:14.558800] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:50.919 [2024-06-07 14:40:14.558807] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:50.919 [2024-06-07 14:40:14.562309] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.182 [2024-06-07 14:40:14.571600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.182 [2024-06-07 14:40:14.572175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.182 [2024-06-07 14:40:14.572190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.182 [2024-06-07 14:40:14.572204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.182 [2024-06-07 14:40:14.572420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.182 [2024-06-07 14:40:14.572636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.182 [2024-06-07 14:40:14.572643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.182 [2024-06-07 14:40:14.572650] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.182 [2024-06-07 14:40:14.576146] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.182 [2024-06-07 14:40:14.585436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.182 [2024-06-07 14:40:14.586007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.182 [2024-06-07 14:40:14.586022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.182 [2024-06-07 14:40:14.586029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.182 [2024-06-07 14:40:14.586250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.182 [2024-06-07 14:40:14.586466] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.182 [2024-06-07 14:40:14.586474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.182 [2024-06-07 14:40:14.586481] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.182 [2024-06-07 14:40:14.589973] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.182 [2024-06-07 14:40:14.599263] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.182 [2024-06-07 14:40:14.599795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.182 [2024-06-07 14:40:14.599809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.182 [2024-06-07 14:40:14.599817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.182 [2024-06-07 14:40:14.600032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.182 [2024-06-07 14:40:14.600253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.182 [2024-06-07 14:40:14.600265] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.182 [2024-06-07 14:40:14.600272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.182 [2024-06-07 14:40:14.603764] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.182 [2024-06-07 14:40:14.613049] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.182 [2024-06-07 14:40:14.613614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.182 [2024-06-07 14:40:14.613652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.182 [2024-06-07 14:40:14.613664] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.182 [2024-06-07 14:40:14.613903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.182 [2024-06-07 14:40:14.614123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.182 [2024-06-07 14:40:14.614131] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.182 [2024-06-07 14:40:14.614139] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.182 [2024-06-07 14:40:14.617653] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.182 [2024-06-07 14:40:14.626947] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.182 [2024-06-07 14:40:14.627542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.182 [2024-06-07 14:40:14.627561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.182 [2024-06-07 14:40:14.627569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.182 [2024-06-07 14:40:14.627785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.182 [2024-06-07 14:40:14.628001] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.182 [2024-06-07 14:40:14.628008] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.182 [2024-06-07 14:40:14.628015] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.182 [2024-06-07 14:40:14.631516] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.182 [2024-06-07 14:40:14.640804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.182 [2024-06-07 14:40:14.641479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.182 [2024-06-07 14:40:14.641517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.182 [2024-06-07 14:40:14.641528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.182 [2024-06-07 14:40:14.641764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.182 [2024-06-07 14:40:14.641984] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.182 [2024-06-07 14:40:14.641992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.182 [2024-06-07 14:40:14.642000] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.182 [2024-06-07 14:40:14.645516] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.182 [2024-06-07 14:40:14.654596] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.182 [2024-06-07 14:40:14.655148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.182 [2024-06-07 14:40:14.655185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.182 [2024-06-07 14:40:14.655205] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.182 [2024-06-07 14:40:14.655442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.182 [2024-06-07 14:40:14.655661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.182 [2024-06-07 14:40:14.655670] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.182 [2024-06-07 14:40:14.655678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.182 [2024-06-07 14:40:14.659173] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.182 [2024-06-07 14:40:14.668470] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.182 [2024-06-07 14:40:14.669035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.182 [2024-06-07 14:40:14.669071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.182 [2024-06-07 14:40:14.669082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.182 [2024-06-07 14:40:14.669329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.182 [2024-06-07 14:40:14.669550] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.182 [2024-06-07 14:40:14.669558] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.182 [2024-06-07 14:40:14.669566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.182 [2024-06-07 14:40:14.673067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.182 [2024-06-07 14:40:14.682363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.182 [2024-06-07 14:40:14.682906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.182 [2024-06-07 14:40:14.682923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.182 [2024-06-07 14:40:14.682931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.182 [2024-06-07 14:40:14.683147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.182 [2024-06-07 14:40:14.683371] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.182 [2024-06-07 14:40:14.683380] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.182 [2024-06-07 14:40:14.683387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.183 [2024-06-07 14:40:14.686886] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.183 [2024-06-07 14:40:14.696174] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.183 [2024-06-07 14:40:14.696711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.183 [2024-06-07 14:40:14.696727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.183 [2024-06-07 14:40:14.696739] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.183 [2024-06-07 14:40:14.696955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.183 [2024-06-07 14:40:14.697171] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.183 [2024-06-07 14:40:14.697178] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.183 [2024-06-07 14:40:14.697185] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.183 [2024-06-07 14:40:14.700684] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.183 [2024-06-07 14:40:14.709963] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.183 [2024-06-07 14:40:14.710505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.183 [2024-06-07 14:40:14.710522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.183 [2024-06-07 14:40:14.710529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.183 [2024-06-07 14:40:14.710746] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.183 [2024-06-07 14:40:14.710961] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.183 [2024-06-07 14:40:14.710969] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.183 [2024-06-07 14:40:14.710976] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.183 [2024-06-07 14:40:14.714477] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.183 [2024-06-07 14:40:14.723765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.183 [2024-06-07 14:40:14.724314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.183 [2024-06-07 14:40:14.724330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.183 [2024-06-07 14:40:14.724337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.183 [2024-06-07 14:40:14.724553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.183 [2024-06-07 14:40:14.724769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.183 [2024-06-07 14:40:14.724784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.183 [2024-06-07 14:40:14.724791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.183 [2024-06-07 14:40:14.728289] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.183 [2024-06-07 14:40:14.737572] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.183 [2024-06-07 14:40:14.738142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.183 [2024-06-07 14:40:14.738157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.183 [2024-06-07 14:40:14.738164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.183 [2024-06-07 14:40:14.738384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.183 [2024-06-07 14:40:14.738600] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.183 [2024-06-07 14:40:14.738612] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.183 [2024-06-07 14:40:14.738619] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.183 [2024-06-07 14:40:14.742114] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.183 [2024-06-07 14:40:14.751441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.183 [2024-06-07 14:40:14.752016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.183 [2024-06-07 14:40:14.752032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.183 [2024-06-07 14:40:14.752040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.183 [2024-06-07 14:40:14.752260] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.183 [2024-06-07 14:40:14.752477] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.183 [2024-06-07 14:40:14.752484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.183 [2024-06-07 14:40:14.752491] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.183 [2024-06-07 14:40:14.755981] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.183 [2024-06-07 14:40:14.765266] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.183 [2024-06-07 14:40:14.765797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.183 [2024-06-07 14:40:14.765812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.183 [2024-06-07 14:40:14.765820] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.183 [2024-06-07 14:40:14.766035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.183 [2024-06-07 14:40:14.766256] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.183 [2024-06-07 14:40:14.766264] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.183 [2024-06-07 14:40:14.766271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.183 [2024-06-07 14:40:14.769765] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.183 [2024-06-07 14:40:14.779047] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.183 [2024-06-07 14:40:14.779646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.183 [2024-06-07 14:40:14.779684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.183 [2024-06-07 14:40:14.779695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.183 [2024-06-07 14:40:14.779930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.183 [2024-06-07 14:40:14.780150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.183 [2024-06-07 14:40:14.780159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.183 [2024-06-07 14:40:14.780166] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.183 [2024-06-07 14:40:14.783675] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.183 [2024-06-07 14:40:14.792968] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.183 [2024-06-07 14:40:14.793498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.183 [2024-06-07 14:40:14.793516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.183 [2024-06-07 14:40:14.793524] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.183 [2024-06-07 14:40:14.793740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.183 [2024-06-07 14:40:14.793957] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.183 [2024-06-07 14:40:14.793964] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.183 [2024-06-07 14:40:14.793971] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.183 [2024-06-07 14:40:14.797473] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.183 [2024-06-07 14:40:14.806765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.183 [2024-06-07 14:40:14.807302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.183 [2024-06-07 14:40:14.807341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.183 [2024-06-07 14:40:14.807354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.183 [2024-06-07 14:40:14.807593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.183 [2024-06-07 14:40:14.807812] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.183 [2024-06-07 14:40:14.807821] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.183 [2024-06-07 14:40:14.807828] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.183 [2024-06-07 14:40:14.811338] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.183 [2024-06-07 14:40:14.820621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.183 [2024-06-07 14:40:14.821248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.183 [2024-06-07 14:40:14.821287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.183 [2024-06-07 14:40:14.821299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.183 [2024-06-07 14:40:14.821537] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.183 [2024-06-07 14:40:14.821756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.183 [2024-06-07 14:40:14.821764] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.183 [2024-06-07 14:40:14.821772] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.184 [2024-06-07 14:40:14.825277] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.446 [2024-06-07 14:40:14.834561] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.446 [2024-06-07 14:40:14.835219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.446 [2024-06-07 14:40:14.835257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.446 [2024-06-07 14:40:14.835268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.446 [2024-06-07 14:40:14.835509] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.446 [2024-06-07 14:40:14.835730] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.446 [2024-06-07 14:40:14.835738] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.446 [2024-06-07 14:40:14.835746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.446 [2024-06-07 14:40:14.839250] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.446 [2024-06-07 14:40:14.848331] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.446 [2024-06-07 14:40:14.849000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.446 [2024-06-07 14:40:14.849037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.446 [2024-06-07 14:40:14.849048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.446 [2024-06-07 14:40:14.849291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.446 [2024-06-07 14:40:14.849512] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.446 [2024-06-07 14:40:14.849521] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.446 [2024-06-07 14:40:14.849529] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.446 [2024-06-07 14:40:14.853024] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.446 [2024-06-07 14:40:14.862105] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.446 [2024-06-07 14:40:14.862762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.446 [2024-06-07 14:40:14.862800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.446 [2024-06-07 14:40:14.862811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.446 [2024-06-07 14:40:14.863048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.446 [2024-06-07 14:40:14.863276] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.446 [2024-06-07 14:40:14.863285] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.446 [2024-06-07 14:40:14.863293] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.446 [2024-06-07 14:40:14.866792] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.446 [2024-06-07 14:40:14.875867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.446 [2024-06-07 14:40:14.876377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.446 [2024-06-07 14:40:14.876395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.446 [2024-06-07 14:40:14.876404] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.446 [2024-06-07 14:40:14.876620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.446 [2024-06-07 14:40:14.876836] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.446 [2024-06-07 14:40:14.876844] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.446 [2024-06-07 14:40:14.876855] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.446 [2024-06-07 14:40:14.880348] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.446 [2024-06-07 14:40:14.889631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.446 [2024-06-07 14:40:14.890237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.446 [2024-06-07 14:40:14.890260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.446 [2024-06-07 14:40:14.890269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.446 [2024-06-07 14:40:14.890489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.446 [2024-06-07 14:40:14.890705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.446 [2024-06-07 14:40:14.890713] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.446 [2024-06-07 14:40:14.890720] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.446 [2024-06-07 14:40:14.894220] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.446 [2024-06-07 14:40:14.903492] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.446 [2024-06-07 14:40:14.904086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.446 [2024-06-07 14:40:14.904103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.447 [2024-06-07 14:40:14.904110] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.447 [2024-06-07 14:40:14.904330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.447 [2024-06-07 14:40:14.904546] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.447 [2024-06-07 14:40:14.904553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.447 [2024-06-07 14:40:14.904560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.447 [2024-06-07 14:40:14.908050] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.447 [2024-06-07 14:40:14.917323] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.447 [2024-06-07 14:40:14.917897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.447 [2024-06-07 14:40:14.917912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.447 [2024-06-07 14:40:14.917920] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.447 [2024-06-07 14:40:14.918135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.447 [2024-06-07 14:40:14.918355] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.447 [2024-06-07 14:40:14.918363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.447 [2024-06-07 14:40:14.918370] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.447 [2024-06-07 14:40:14.921859] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.447 [2024-06-07 14:40:14.931143] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.447 [2024-06-07 14:40:14.931816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.447 [2024-06-07 14:40:14.931858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.447 [2024-06-07 14:40:14.931869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.447 [2024-06-07 14:40:14.932105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.447 [2024-06-07 14:40:14.932333] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.447 [2024-06-07 14:40:14.932343] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.447 [2024-06-07 14:40:14.932350] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.447 [2024-06-07 14:40:14.935847] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.447 [2024-06-07 14:40:14.944962] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.447 [2024-06-07 14:40:14.945525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.447 [2024-06-07 14:40:14.945543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.447 [2024-06-07 14:40:14.945551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.447 [2024-06-07 14:40:14.945768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.447 [2024-06-07 14:40:14.945984] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.447 [2024-06-07 14:40:14.945992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.447 [2024-06-07 14:40:14.945999] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.447 [2024-06-07 14:40:14.949495] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.447 [2024-06-07 14:40:14.958798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.447 [2024-06-07 14:40:14.959215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.447 [2024-06-07 14:40:14.959235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.447 [2024-06-07 14:40:14.959242] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.447 [2024-06-07 14:40:14.959459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.447 [2024-06-07 14:40:14.959675] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.447 [2024-06-07 14:40:14.959690] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.447 [2024-06-07 14:40:14.959697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.447 [2024-06-07 14:40:14.963189] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.447 [2024-06-07 14:40:14.972670] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.447 [2024-06-07 14:40:14.973115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.447 [2024-06-07 14:40:14.973130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.447 [2024-06-07 14:40:14.973137] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.447 [2024-06-07 14:40:14.973357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.447 [2024-06-07 14:40:14.973577] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.447 [2024-06-07 14:40:14.973584] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.447 [2024-06-07 14:40:14.973591] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.447 [2024-06-07 14:40:14.977082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.447 [2024-06-07 14:40:14.986578] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.447 [2024-06-07 14:40:14.987154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.447 [2024-06-07 14:40:14.987169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.447 [2024-06-07 14:40:14.987176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.447 [2024-06-07 14:40:14.987397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.447 [2024-06-07 14:40:14.987613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.447 [2024-06-07 14:40:14.987620] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.447 [2024-06-07 14:40:14.987627] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.447 [2024-06-07 14:40:14.991129] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.447 [2024-06-07 14:40:15.000423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.447 [2024-06-07 14:40:15.001054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.447 [2024-06-07 14:40:15.001092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.447 [2024-06-07 14:40:15.001103] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.447 [2024-06-07 14:40:15.001346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.447 [2024-06-07 14:40:15.001566] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.447 [2024-06-07 14:40:15.001575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.447 [2024-06-07 14:40:15.001582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.447 [2024-06-07 14:40:15.005079] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.447 [2024-06-07 14:40:15.014154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.447 [2024-06-07 14:40:15.014818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.447 [2024-06-07 14:40:15.014856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.447 [2024-06-07 14:40:15.014867] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.447 [2024-06-07 14:40:15.015103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.447 [2024-06-07 14:40:15.015329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.447 [2024-06-07 14:40:15.015338] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.447 [2024-06-07 14:40:15.015346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.447 [2024-06-07 14:40:15.018851] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.447 [2024-06-07 14:40:15.027924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.447 [2024-06-07 14:40:15.028497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.447 [2024-06-07 14:40:15.028516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.447 [2024-06-07 14:40:15.028524] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.447 [2024-06-07 14:40:15.028741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.447 [2024-06-07 14:40:15.028957] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.447 [2024-06-07 14:40:15.028965] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.447 [2024-06-07 14:40:15.028972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.447 [2024-06-07 14:40:15.032471] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.447 [2024-06-07 14:40:15.041755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.447 [2024-06-07 14:40:15.042312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.447 [2024-06-07 14:40:15.042350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.447 [2024-06-07 14:40:15.042363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.448 [2024-06-07 14:40:15.042602] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.448 [2024-06-07 14:40:15.042821] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.448 [2024-06-07 14:40:15.042830] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.448 [2024-06-07 14:40:15.042838] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.448 [2024-06-07 14:40:15.046357] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.448 [2024-06-07 14:40:15.055643] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.448 [2024-06-07 14:40:15.056280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.448 [2024-06-07 14:40:15.056318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.448 [2024-06-07 14:40:15.056331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.448 [2024-06-07 14:40:15.056570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.448 [2024-06-07 14:40:15.056789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.448 [2024-06-07 14:40:15.056798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.448 [2024-06-07 14:40:15.056805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.448 [2024-06-07 14:40:15.060313] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.448 [2024-06-07 14:40:15.069387] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.448 [2024-06-07 14:40:15.069817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.448 [2024-06-07 14:40:15.069837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.448 [2024-06-07 14:40:15.069849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.448 [2024-06-07 14:40:15.070066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.448 [2024-06-07 14:40:15.070288] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.448 [2024-06-07 14:40:15.070296] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.448 [2024-06-07 14:40:15.070303] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.448 [2024-06-07 14:40:15.073799] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.448 [2024-06-07 14:40:15.083288] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.448 [2024-06-07 14:40:15.083952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.448 [2024-06-07 14:40:15.083989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.448 [2024-06-07 14:40:15.084000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.448 [2024-06-07 14:40:15.084244] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.448 [2024-06-07 14:40:15.084464] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.448 [2024-06-07 14:40:15.084472] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.448 [2024-06-07 14:40:15.084480] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.448 [2024-06-07 14:40:15.087977] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.711 [2024-06-07 14:40:15.097053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.711 [2024-06-07 14:40:15.097605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.711 [2024-06-07 14:40:15.097623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.711 [2024-06-07 14:40:15.097631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.711 [2024-06-07 14:40:15.097847] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.711 [2024-06-07 14:40:15.098063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.711 [2024-06-07 14:40:15.098070] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.711 [2024-06-07 14:40:15.098077] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.711 [2024-06-07 14:40:15.101570] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.711 [2024-06-07 14:40:15.110841] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.711 [2024-06-07 14:40:15.111385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.711 [2024-06-07 14:40:15.111401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.711 [2024-06-07 14:40:15.111409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.711 [2024-06-07 14:40:15.111625] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.711 [2024-06-07 14:40:15.111841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.711 [2024-06-07 14:40:15.111853] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.711 [2024-06-07 14:40:15.111860] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.711 [2024-06-07 14:40:15.115356] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.711 [2024-06-07 14:40:15.124633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.711 [2024-06-07 14:40:15.125193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.711 [2024-06-07 14:40:15.125213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.711 [2024-06-07 14:40:15.125220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.711 [2024-06-07 14:40:15.125436] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.711 [2024-06-07 14:40:15.125651] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.711 [2024-06-07 14:40:15.125660] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.711 [2024-06-07 14:40:15.125667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.711 [2024-06-07 14:40:15.129160] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.711 [2024-06-07 14:40:15.138440] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.711 [2024-06-07 14:40:15.139012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.711 [2024-06-07 14:40:15.139027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.711 [2024-06-07 14:40:15.139034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.711 [2024-06-07 14:40:15.139256] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.711 [2024-06-07 14:40:15.139472] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.711 [2024-06-07 14:40:15.139480] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.711 [2024-06-07 14:40:15.139487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.711 [2024-06-07 14:40:15.142974] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.711 [2024-06-07 14:40:15.152263] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.711 [2024-06-07 14:40:15.152924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.711 [2024-06-07 14:40:15.152961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.711 [2024-06-07 14:40:15.152972] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.711 [2024-06-07 14:40:15.153215] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.711 [2024-06-07 14:40:15.153436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.711 [2024-06-07 14:40:15.153444] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.711 [2024-06-07 14:40:15.153452] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.711 [2024-06-07 14:40:15.156947] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.711 [2024-06-07 14:40:15.166053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.711 [2024-06-07 14:40:15.166739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.711 [2024-06-07 14:40:15.166777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.711 [2024-06-07 14:40:15.166790] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.711 [2024-06-07 14:40:15.167027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.711 [2024-06-07 14:40:15.167255] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.711 [2024-06-07 14:40:15.167264] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.711 [2024-06-07 14:40:15.167271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.711 [2024-06-07 14:40:15.170767] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.711 [2024-06-07 14:40:15.179843] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.711 [2024-06-07 14:40:15.180504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.711 [2024-06-07 14:40:15.180542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.711 [2024-06-07 14:40:15.180553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.711 [2024-06-07 14:40:15.180789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.711 [2024-06-07 14:40:15.181009] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.711 [2024-06-07 14:40:15.181017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.711 [2024-06-07 14:40:15.181024] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.711 [2024-06-07 14:40:15.184525] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.711 [2024-06-07 14:40:15.193596] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.711 [2024-06-07 14:40:15.194178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.711 [2024-06-07 14:40:15.194201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.711 [2024-06-07 14:40:15.194210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.711 [2024-06-07 14:40:15.194426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.711 [2024-06-07 14:40:15.194641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.711 [2024-06-07 14:40:15.194650] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.712 [2024-06-07 14:40:15.194656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.712 [2024-06-07 14:40:15.198148] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.712 [2024-06-07 14:40:15.207423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.712 [2024-06-07 14:40:15.208079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.712 [2024-06-07 14:40:15.208116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.712 [2024-06-07 14:40:15.208126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.712 [2024-06-07 14:40:15.208376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.712 [2024-06-07 14:40:15.208597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.712 [2024-06-07 14:40:15.208605] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.712 [2024-06-07 14:40:15.208613] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.712 [2024-06-07 14:40:15.212109] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.712 [2024-06-07 14:40:15.221180] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.712 [2024-06-07 14:40:15.221855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.712 [2024-06-07 14:40:15.221893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.712 [2024-06-07 14:40:15.221904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.712 [2024-06-07 14:40:15.222140] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.712 [2024-06-07 14:40:15.222369] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.712 [2024-06-07 14:40:15.222378] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.712 [2024-06-07 14:40:15.222386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.712 [2024-06-07 14:40:15.225882] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.712 [2024-06-07 14:40:15.234955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.712 [2024-06-07 14:40:15.235635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.712 [2024-06-07 14:40:15.235673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.712 [2024-06-07 14:40:15.235684] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.712 [2024-06-07 14:40:15.235920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.712 [2024-06-07 14:40:15.236140] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.712 [2024-06-07 14:40:15.236148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.712 [2024-06-07 14:40:15.236156] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.712 [2024-06-07 14:40:15.239664] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.712 [2024-06-07 14:40:15.248748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.712 [2024-06-07 14:40:15.249437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.712 [2024-06-07 14:40:15.249475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.712 [2024-06-07 14:40:15.249486] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.712 [2024-06-07 14:40:15.249722] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.712 [2024-06-07 14:40:15.249942] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.712 [2024-06-07 14:40:15.249950] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.712 [2024-06-07 14:40:15.249962] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.712 [2024-06-07 14:40:15.253467] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.712 [2024-06-07 14:40:15.262535] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.712 [2024-06-07 14:40:15.263117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.712 [2024-06-07 14:40:15.263134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.712 [2024-06-07 14:40:15.263142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.712 [2024-06-07 14:40:15.263365] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.712 [2024-06-07 14:40:15.263581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.712 [2024-06-07 14:40:15.263589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.712 [2024-06-07 14:40:15.263596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.712 [2024-06-07 14:40:15.267084] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.712 [2024-06-07 14:40:15.276354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.712 [2024-06-07 14:40:15.276999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.712 [2024-06-07 14:40:15.277036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.712 [2024-06-07 14:40:15.277047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.712 [2024-06-07 14:40:15.277291] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.712 [2024-06-07 14:40:15.277513] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.712 [2024-06-07 14:40:15.277521] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.712 [2024-06-07 14:40:15.277528] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.712 [2024-06-07 14:40:15.281027] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.712 [2024-06-07 14:40:15.290095] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.712 [2024-06-07 14:40:15.290749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.712 [2024-06-07 14:40:15.290787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.712 [2024-06-07 14:40:15.290798] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.712 [2024-06-07 14:40:15.291034] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.712 [2024-06-07 14:40:15.291264] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.712 [2024-06-07 14:40:15.291273] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.712 [2024-06-07 14:40:15.291281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.712 [2024-06-07 14:40:15.294778] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.712 [2024-06-07 14:40:15.303848] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.712 [2024-06-07 14:40:15.304443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.712 [2024-06-07 14:40:15.304480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.712 [2024-06-07 14:40:15.304491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.712 [2024-06-07 14:40:15.304727] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.712 [2024-06-07 14:40:15.304947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.712 [2024-06-07 14:40:15.304955] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.712 [2024-06-07 14:40:15.304962] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.712 [2024-06-07 14:40:15.308468] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.712 [2024-06-07 14:40:15.317748] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.712 [2024-06-07 14:40:15.318470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.712 [2024-06-07 14:40:15.318508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.712 [2024-06-07 14:40:15.318518] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.712 [2024-06-07 14:40:15.318754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.712 [2024-06-07 14:40:15.318974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.712 [2024-06-07 14:40:15.318983] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.712 [2024-06-07 14:40:15.318991] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.712 [2024-06-07 14:40:15.322497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.712 [2024-06-07 14:40:15.331573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.712 [2024-06-07 14:40:15.332249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.712 [2024-06-07 14:40:15.332287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.712 [2024-06-07 14:40:15.332299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.712 [2024-06-07 14:40:15.332538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.712 [2024-06-07 14:40:15.332758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.712 [2024-06-07 14:40:15.332766] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.712 [2024-06-07 14:40:15.332774] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.712 [2024-06-07 14:40:15.336279] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.713 [2024-06-07 14:40:15.345348] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.713 [2024-06-07 14:40:15.346026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.713 [2024-06-07 14:40:15.346064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.713 [2024-06-07 14:40:15.346074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.713 [2024-06-07 14:40:15.346334] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.713 [2024-06-07 14:40:15.346556] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.713 [2024-06-07 14:40:15.346564] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.713 [2024-06-07 14:40:15.346572] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.713 [2024-06-07 14:40:15.350067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.974 [2024-06-07 14:40:15.359140] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.974 [2024-06-07 14:40:15.359815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.974 [2024-06-07 14:40:15.359853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.974 [2024-06-07 14:40:15.359864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.974 [2024-06-07 14:40:15.360100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.974 [2024-06-07 14:40:15.360329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.974 [2024-06-07 14:40:15.360338] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.974 [2024-06-07 14:40:15.360346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.974 [2024-06-07 14:40:15.363847] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.974 [2024-06-07 14:40:15.372952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.974 [2024-06-07 14:40:15.373649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.974 [2024-06-07 14:40:15.373687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.974 [2024-06-07 14:40:15.373697] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.974 [2024-06-07 14:40:15.373933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.974 [2024-06-07 14:40:15.374153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.974 [2024-06-07 14:40:15.374161] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.974 [2024-06-07 14:40:15.374169] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.974 [2024-06-07 14:40:15.377671] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.974 [2024-06-07 14:40:15.386739] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.974 [2024-06-07 14:40:15.387334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.974 [2024-06-07 14:40:15.387372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.974 [2024-06-07 14:40:15.387382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.974 [2024-06-07 14:40:15.387618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.974 [2024-06-07 14:40:15.387838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.974 [2024-06-07 14:40:15.387847] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.974 [2024-06-07 14:40:15.387858] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.974 [2024-06-07 14:40:15.391365] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.975 [2024-06-07 14:40:15.400643] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.975 [2024-06-07 14:40:15.401200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.975 [2024-06-07 14:40:15.401218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.975 [2024-06-07 14:40:15.401226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.975 [2024-06-07 14:40:15.401442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.975 [2024-06-07 14:40:15.401657] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.975 [2024-06-07 14:40:15.401665] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.975 [2024-06-07 14:40:15.401672] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.975 [2024-06-07 14:40:15.405163] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.975 [2024-06-07 14:40:15.414436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.975 [2024-06-07 14:40:15.414993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.975 [2024-06-07 14:40:15.415009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.975 [2024-06-07 14:40:15.415017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.975 [2024-06-07 14:40:15.415239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.975 [2024-06-07 14:40:15.415456] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.975 [2024-06-07 14:40:15.415464] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.975 [2024-06-07 14:40:15.415471] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.975 [2024-06-07 14:40:15.418959] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.975 [2024-06-07 14:40:15.428228] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.975 [2024-06-07 14:40:15.428881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.975 [2024-06-07 14:40:15.428919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.975 [2024-06-07 14:40:15.428929] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.975 [2024-06-07 14:40:15.429165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.975 [2024-06-07 14:40:15.429393] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.975 [2024-06-07 14:40:15.429402] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.975 [2024-06-07 14:40:15.429410] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.975 [2024-06-07 14:40:15.432906] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.975 [2024-06-07 14:40:15.441984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.975 [2024-06-07 14:40:15.442528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.975 [2024-06-07 14:40:15.442550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.975 [2024-06-07 14:40:15.442558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.975 [2024-06-07 14:40:15.442776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.975 [2024-06-07 14:40:15.442991] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.975 [2024-06-07 14:40:15.442999] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.975 [2024-06-07 14:40:15.443006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.975 [2024-06-07 14:40:15.446512] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.975 [2024-06-07 14:40:15.455792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.975 [2024-06-07 14:40:15.456430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.975 [2024-06-07 14:40:15.456468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.975 [2024-06-07 14:40:15.456478] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.975 [2024-06-07 14:40:15.456714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.975 [2024-06-07 14:40:15.456933] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.975 [2024-06-07 14:40:15.456942] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.975 [2024-06-07 14:40:15.456949] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.975 [2024-06-07 14:40:15.460454] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.975 [2024-06-07 14:40:15.469599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.975 [2024-06-07 14:40:15.470239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.975 [2024-06-07 14:40:15.470277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.975 [2024-06-07 14:40:15.470289] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.975 [2024-06-07 14:40:15.470528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.975 [2024-06-07 14:40:15.470748] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.975 [2024-06-07 14:40:15.470756] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.975 [2024-06-07 14:40:15.470764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.975 [2024-06-07 14:40:15.474269] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.975 [2024-06-07 14:40:15.483342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.975 [2024-06-07 14:40:15.484013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.975 [2024-06-07 14:40:15.484051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.975 [2024-06-07 14:40:15.484062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.975 [2024-06-07 14:40:15.484306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.975 [2024-06-07 14:40:15.484531] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.975 [2024-06-07 14:40:15.484540] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.975 [2024-06-07 14:40:15.484547] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.975 [2024-06-07 14:40:15.488044] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.975 [2024-06-07 14:40:15.497117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.975 [2024-06-07 14:40:15.497768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.975 [2024-06-07 14:40:15.497806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.975 [2024-06-07 14:40:15.497817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.975 [2024-06-07 14:40:15.498052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.975 [2024-06-07 14:40:15.498281] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.975 [2024-06-07 14:40:15.498291] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.975 [2024-06-07 14:40:15.498298] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.975 [2024-06-07 14:40:15.501793] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.975 [2024-06-07 14:40:15.510869] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.975 [2024-06-07 14:40:15.511514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.975 [2024-06-07 14:40:15.511552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.975 [2024-06-07 14:40:15.511563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.975 [2024-06-07 14:40:15.511799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.975 [2024-06-07 14:40:15.512018] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.975 [2024-06-07 14:40:15.512027] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.975 [2024-06-07 14:40:15.512034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.975 [2024-06-07 14:40:15.515537] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.975 [2024-06-07 14:40:15.524610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.975 [2024-06-07 14:40:15.525300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.975 [2024-06-07 14:40:15.525338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.975 [2024-06-07 14:40:15.525350] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.975 [2024-06-07 14:40:15.525587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.975 [2024-06-07 14:40:15.525807] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.975 [2024-06-07 14:40:15.525815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.975 [2024-06-07 14:40:15.525823] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.975 [2024-06-07 14:40:15.529336] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.975 [2024-06-07 14:40:15.538407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.975 [2024-06-07 14:40:15.539041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.976 [2024-06-07 14:40:15.539079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.976 [2024-06-07 14:40:15.539090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.976 [2024-06-07 14:40:15.539334] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.976 [2024-06-07 14:40:15.539554] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.976 [2024-06-07 14:40:15.539563] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.976 [2024-06-07 14:40:15.539570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.976 [2024-06-07 14:40:15.543065] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.976 [2024-06-07 14:40:15.552152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.976 [2024-06-07 14:40:15.552836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.976 [2024-06-07 14:40:15.552873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.976 [2024-06-07 14:40:15.552884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.976 [2024-06-07 14:40:15.553120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.976 [2024-06-07 14:40:15.553354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.976 [2024-06-07 14:40:15.553363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.976 [2024-06-07 14:40:15.553370] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.976 [2024-06-07 14:40:15.556866] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.976 [2024-06-07 14:40:15.565934] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.976 [2024-06-07 14:40:15.566481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.976 [2024-06-07 14:40:15.566500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.976 [2024-06-07 14:40:15.566508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.976 [2024-06-07 14:40:15.566725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.976 [2024-06-07 14:40:15.566941] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.976 [2024-06-07 14:40:15.566949] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.976 [2024-06-07 14:40:15.566955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.976 [2024-06-07 14:40:15.570453] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.976 [2024-06-07 14:40:15.579828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.976 [2024-06-07 14:40:15.580495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.976 [2024-06-07 14:40:15.580532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.976 [2024-06-07 14:40:15.580548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.976 [2024-06-07 14:40:15.580784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.976 [2024-06-07 14:40:15.581003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.976 [2024-06-07 14:40:15.581011] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.976 [2024-06-07 14:40:15.581019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.976 [2024-06-07 14:40:15.584525] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:51.976 [2024-06-07 14:40:15.593602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.976 [2024-06-07 14:40:15.594289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.976 [2024-06-07 14:40:15.594326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.976 [2024-06-07 14:40:15.594339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.976 [2024-06-07 14:40:15.594578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.976 [2024-06-07 14:40:15.594797] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.976 [2024-06-07 14:40:15.594807] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.976 [2024-06-07 14:40:15.594814] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.976 [2024-06-07 14:40:15.598319] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:51.976 [2024-06-07 14:40:15.607392] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:51.976 [2024-06-07 14:40:15.608004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.976 [2024-06-07 14:40:15.608041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:51.976 [2024-06-07 14:40:15.608052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:51.976 [2024-06-07 14:40:15.608296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:51.976 [2024-06-07 14:40:15.608517] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:51.976 [2024-06-07 14:40:15.608525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:51.976 [2024-06-07 14:40:15.608532] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:51.976 [2024-06-07 14:40:15.612029] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.239 [2024-06-07 14:40:15.621307] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.239 [2024-06-07 14:40:15.621846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.239 [2024-06-07 14:40:15.621884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.239 [2024-06-07 14:40:15.621897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.239 [2024-06-07 14:40:15.622136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.239 [2024-06-07 14:40:15.622365] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.239 [2024-06-07 14:40:15.622379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.239 [2024-06-07 14:40:15.622387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.239 [2024-06-07 14:40:15.625882] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.239 [2024-06-07 14:40:15.635163] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.239 [2024-06-07 14:40:15.635843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.239 [2024-06-07 14:40:15.635880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.239 [2024-06-07 14:40:15.635891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.239 [2024-06-07 14:40:15.636127] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.239 [2024-06-07 14:40:15.636358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.239 [2024-06-07 14:40:15.636367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.239 [2024-06-07 14:40:15.636374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.239 [2024-06-07 14:40:15.639872] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.239 [2024-06-07 14:40:15.648952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.239 [2024-06-07 14:40:15.649652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.239 [2024-06-07 14:40:15.649690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.239 [2024-06-07 14:40:15.649701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.239 [2024-06-07 14:40:15.649936] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.239 [2024-06-07 14:40:15.650156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.239 [2024-06-07 14:40:15.650164] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.239 [2024-06-07 14:40:15.650172] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.239 [2024-06-07 14:40:15.653677] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.239 [2024-06-07 14:40:15.662752] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.239 [2024-06-07 14:40:15.663325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.239 [2024-06-07 14:40:15.663363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.239 [2024-06-07 14:40:15.663376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.239 [2024-06-07 14:40:15.663615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.239 [2024-06-07 14:40:15.663834] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.239 [2024-06-07 14:40:15.663843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.239 [2024-06-07 14:40:15.663851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.239 [2024-06-07 14:40:15.667354] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.239 [2024-06-07 14:40:15.676646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.239 [2024-06-07 14:40:15.677300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.239 [2024-06-07 14:40:15.677337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.239 [2024-06-07 14:40:15.677348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.239 [2024-06-07 14:40:15.677584] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.239 [2024-06-07 14:40:15.677804] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.239 [2024-06-07 14:40:15.677812] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.239 [2024-06-07 14:40:15.677819] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.239 [2024-06-07 14:40:15.681324] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.239 [2024-06-07 14:40:15.690394] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.239 [2024-06-07 14:40:15.691078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.239 [2024-06-07 14:40:15.691115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.239 [2024-06-07 14:40:15.691126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.239 [2024-06-07 14:40:15.691370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.239 [2024-06-07 14:40:15.691591] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.239 [2024-06-07 14:40:15.691600] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.239 [2024-06-07 14:40:15.691607] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.239 [2024-06-07 14:40:15.695106] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.239 [2024-06-07 14:40:15.704182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.239 [2024-06-07 14:40:15.704833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.239 [2024-06-07 14:40:15.704871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.239 [2024-06-07 14:40:15.704881] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.239 [2024-06-07 14:40:15.705117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.239 [2024-06-07 14:40:15.705533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.239 [2024-06-07 14:40:15.705545] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.239 [2024-06-07 14:40:15.705553] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.239 [2024-06-07 14:40:15.709051] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.239 [2024-06-07 14:40:15.717919] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.239 [2024-06-07 14:40:15.718578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.239 [2024-06-07 14:40:15.718616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.239 [2024-06-07 14:40:15.718627] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.239 [2024-06-07 14:40:15.718871] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.239 [2024-06-07 14:40:15.719091] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.239 [2024-06-07 14:40:15.719100] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.239 [2024-06-07 14:40:15.719107] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.239 [2024-06-07 14:40:15.722611] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.239 [2024-06-07 14:40:15.731682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.239 [2024-06-07 14:40:15.732227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.239 [2024-06-07 14:40:15.732246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.239 [2024-06-07 14:40:15.732254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.239 [2024-06-07 14:40:15.732470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.239 [2024-06-07 14:40:15.732687] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.239 [2024-06-07 14:40:15.732694] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.239 [2024-06-07 14:40:15.732701] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.239 [2024-06-07 14:40:15.736203] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.239 [2024-06-07 14:40:15.745472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.239 [2024-06-07 14:40:15.745876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.239 [2024-06-07 14:40:15.745896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.239 [2024-06-07 14:40:15.745904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.239 [2024-06-07 14:40:15.746122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.239 [2024-06-07 14:40:15.746347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.240 [2024-06-07 14:40:15.746356] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.240 [2024-06-07 14:40:15.746362] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.240 [2024-06-07 14:40:15.749866] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.240 [2024-06-07 14:40:15.759348] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.240 [2024-06-07 14:40:15.759992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.240 [2024-06-07 14:40:15.760029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.240 [2024-06-07 14:40:15.760040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.240 [2024-06-07 14:40:15.760284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.240 [2024-06-07 14:40:15.760505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.240 [2024-06-07 14:40:15.760514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.240 [2024-06-07 14:40:15.760526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.240 [2024-06-07 14:40:15.764028] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.240 [2024-06-07 14:40:15.773103] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.240 [2024-06-07 14:40:15.773736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.240 [2024-06-07 14:40:15.773774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.240 [2024-06-07 14:40:15.773784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.240 [2024-06-07 14:40:15.774020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.240 [2024-06-07 14:40:15.774249] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.240 [2024-06-07 14:40:15.774258] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.240 [2024-06-07 14:40:15.774266] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.240 [2024-06-07 14:40:15.777765] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.240 [2024-06-07 14:40:15.787064] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.240 [2024-06-07 14:40:15.787746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.240 [2024-06-07 14:40:15.787784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.240 [2024-06-07 14:40:15.787794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.240 [2024-06-07 14:40:15.788030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.240 [2024-06-07 14:40:15.788260] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.240 [2024-06-07 14:40:15.788269] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.240 [2024-06-07 14:40:15.788276] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.240 [2024-06-07 14:40:15.791772] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.240 [2024-06-07 14:40:15.800843] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.240 [2024-06-07 14:40:15.801512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.240 [2024-06-07 14:40:15.801550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.240 [2024-06-07 14:40:15.801560] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.240 [2024-06-07 14:40:15.801796] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.240 [2024-06-07 14:40:15.802016] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.240 [2024-06-07 14:40:15.802024] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.240 [2024-06-07 14:40:15.802031] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.240 [2024-06-07 14:40:15.805536] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.240 [2024-06-07 14:40:15.814603] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.240 [2024-06-07 14:40:15.815281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.240 [2024-06-07 14:40:15.815319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.240 [2024-06-07 14:40:15.815331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.240 [2024-06-07 14:40:15.815569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.240 [2024-06-07 14:40:15.815789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.240 [2024-06-07 14:40:15.815797] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.240 [2024-06-07 14:40:15.815805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.240 [2024-06-07 14:40:15.819318] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.240 [2024-06-07 14:40:15.828385] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.240 [2024-06-07 14:40:15.829054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.240 [2024-06-07 14:40:15.829091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.240 [2024-06-07 14:40:15.829101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.240 [2024-06-07 14:40:15.829346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.240 [2024-06-07 14:40:15.829567] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.240 [2024-06-07 14:40:15.829575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.240 [2024-06-07 14:40:15.829583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.240 [2024-06-07 14:40:15.833079] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.240 [2024-06-07 14:40:15.842151] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.240 [2024-06-07 14:40:15.842674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.240 [2024-06-07 14:40:15.842712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.240 [2024-06-07 14:40:15.842723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.240 [2024-06-07 14:40:15.842959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.240 [2024-06-07 14:40:15.843178] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.240 [2024-06-07 14:40:15.843187] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.240 [2024-06-07 14:40:15.843203] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.240 [2024-06-07 14:40:15.846703] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.240 [2024-06-07 14:40:15.855990] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.240 [2024-06-07 14:40:15.856644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.240 [2024-06-07 14:40:15.856681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.240 [2024-06-07 14:40:15.856692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.240 [2024-06-07 14:40:15.856932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.240 [2024-06-07 14:40:15.857152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.240 [2024-06-07 14:40:15.857161] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.240 [2024-06-07 14:40:15.857168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.240 [2024-06-07 14:40:15.860675] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.240 [2024-06-07 14:40:15.869751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.240 [2024-06-07 14:40:15.870296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.240 [2024-06-07 14:40:15.870334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.240 [2024-06-07 14:40:15.870346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.240 [2024-06-07 14:40:15.870583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.240 [2024-06-07 14:40:15.870803] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.240 [2024-06-07 14:40:15.870811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.240 [2024-06-07 14:40:15.870818] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.240 [2024-06-07 14:40:15.874326] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.240 [2024-06-07 14:40:15.883601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.240 [2024-06-07 14:40:15.884157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.240 [2024-06-07 14:40:15.884175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.240 [2024-06-07 14:40:15.884183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.240 [2024-06-07 14:40:15.884406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.502 [2024-06-07 14:40:15.884622] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.502 [2024-06-07 14:40:15.884633] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.502 [2024-06-07 14:40:15.884640] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.502 [2024-06-07 14:40:15.888132] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.502 [2024-06-07 14:40:15.897407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.502 [2024-06-07 14:40:15.897974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.502 [2024-06-07 14:40:15.897990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.502 [2024-06-07 14:40:15.897997] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.502 [2024-06-07 14:40:15.898255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.502 [2024-06-07 14:40:15.898473] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.502 [2024-06-07 14:40:15.898480] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.502 [2024-06-07 14:40:15.898491] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.502 [2024-06-07 14:40:15.901984] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.502 [2024-06-07 14:40:15.911257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.502 [2024-06-07 14:40:15.911791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.502 [2024-06-07 14:40:15.911829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.502 [2024-06-07 14:40:15.911841] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.502 [2024-06-07 14:40:15.912078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.503 [2024-06-07 14:40:15.912307] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.503 [2024-06-07 14:40:15.912316] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.503 [2024-06-07 14:40:15.912323] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.503 [2024-06-07 14:40:15.915823] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.503 [2024-06-07 14:40:15.925101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.503 [2024-06-07 14:40:15.925778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.503 [2024-06-07 14:40:15.925816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.503 [2024-06-07 14:40:15.925827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.503 [2024-06-07 14:40:15.926063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.503 [2024-06-07 14:40:15.926291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.503 [2024-06-07 14:40:15.926300] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.503 [2024-06-07 14:40:15.926307] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.503 [2024-06-07 14:40:15.929805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.503 [2024-06-07 14:40:15.938884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.503 [2024-06-07 14:40:15.939534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.503 [2024-06-07 14:40:15.939571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.503 [2024-06-07 14:40:15.939582] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.503 [2024-06-07 14:40:15.939818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.503 [2024-06-07 14:40:15.940037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.503 [2024-06-07 14:40:15.940046] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.503 [2024-06-07 14:40:15.940053] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.503 [2024-06-07 14:40:15.943559] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.503 [2024-06-07 14:40:15.952637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.503 [2024-06-07 14:40:15.953216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.503 [2024-06-07 14:40:15.953238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.503 [2024-06-07 14:40:15.953246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.503 [2024-06-07 14:40:15.953464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.503 [2024-06-07 14:40:15.953679] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.503 [2024-06-07 14:40:15.953687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.503 [2024-06-07 14:40:15.953694] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.503 [2024-06-07 14:40:15.957188] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.503 [2024-06-07 14:40:15.966466] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.503 [2024-06-07 14:40:15.967110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.503 [2024-06-07 14:40:15.967148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.503 [2024-06-07 14:40:15.967160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.503 [2024-06-07 14:40:15.967407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.503 [2024-06-07 14:40:15.967627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.503 [2024-06-07 14:40:15.967636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.503 [2024-06-07 14:40:15.967643] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.503 [2024-06-07 14:40:15.971145] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.503 [2024-06-07 14:40:15.980225] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.503 [2024-06-07 14:40:15.980879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.503 [2024-06-07 14:40:15.980917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.503 [2024-06-07 14:40:15.980929] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.503 [2024-06-07 14:40:15.981166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.503 [2024-06-07 14:40:15.981396] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.503 [2024-06-07 14:40:15.981405] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.503 [2024-06-07 14:40:15.981413] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.503 [2024-06-07 14:40:15.984910] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.503 [2024-06-07 14:40:15.993982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.503 [2024-06-07 14:40:15.994644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.503 [2024-06-07 14:40:15.994682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.503 [2024-06-07 14:40:15.994693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.503 [2024-06-07 14:40:15.994929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.503 [2024-06-07 14:40:15.995153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.503 [2024-06-07 14:40:15.995162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.503 [2024-06-07 14:40:15.995170] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.503 [2024-06-07 14:40:15.998671] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.503 [2024-06-07 14:40:16.007741] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.503 [2024-06-07 14:40:16.008405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.503 [2024-06-07 14:40:16.008443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.503 [2024-06-07 14:40:16.008456] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.503 [2024-06-07 14:40:16.008695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.503 [2024-06-07 14:40:16.008915] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.503 [2024-06-07 14:40:16.008923] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.503 [2024-06-07 14:40:16.008931] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.503 [2024-06-07 14:40:16.012434] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.503 [2024-06-07 14:40:16.021505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.503 [2024-06-07 14:40:16.022179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.503 [2024-06-07 14:40:16.022224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.503 [2024-06-07 14:40:16.022237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.503 [2024-06-07 14:40:16.022474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.503 [2024-06-07 14:40:16.022693] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.503 [2024-06-07 14:40:16.022701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.503 [2024-06-07 14:40:16.022709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.503 [2024-06-07 14:40:16.026210] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.503 [2024-06-07 14:40:16.035284] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.503 [2024-06-07 14:40:16.035794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.503 [2024-06-07 14:40:16.035831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.503 [2024-06-07 14:40:16.035842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.503 [2024-06-07 14:40:16.036078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.503 [2024-06-07 14:40:16.036305] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.503 [2024-06-07 14:40:16.036314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.503 [2024-06-07 14:40:16.036322] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.503 [2024-06-07 14:40:16.039825] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.503 [2024-06-07 14:40:16.049114] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.503 [2024-06-07 14:40:16.049767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.503 [2024-06-07 14:40:16.049805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.503 [2024-06-07 14:40:16.049816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.503 [2024-06-07 14:40:16.050052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.503 [2024-06-07 14:40:16.050277] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.503 [2024-06-07 14:40:16.050286] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.504 [2024-06-07 14:40:16.050294] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.504 [2024-06-07 14:40:16.053789] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.504 [2024-06-07 14:40:16.062865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.504 [2024-06-07 14:40:16.063410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.504 [2024-06-07 14:40:16.063429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.504 [2024-06-07 14:40:16.063437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.504 [2024-06-07 14:40:16.063654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.504 [2024-06-07 14:40:16.063870] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.504 [2024-06-07 14:40:16.063877] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.504 [2024-06-07 14:40:16.063884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.504 [2024-06-07 14:40:16.067380] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.504 [2024-06-07 14:40:16.076655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.504 [2024-06-07 14:40:16.077239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.504 [2024-06-07 14:40:16.077262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.504 [2024-06-07 14:40:16.077270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.504 [2024-06-07 14:40:16.077490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.504 [2024-06-07 14:40:16.077707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.504 [2024-06-07 14:40:16.077715] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.504 [2024-06-07 14:40:16.077722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.504 [2024-06-07 14:40:16.081221] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.504 [2024-06-07 14:40:16.090493] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.504 [2024-06-07 14:40:16.091156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.504 [2024-06-07 14:40:16.091202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.504 [2024-06-07 14:40:16.091219] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.504 [2024-06-07 14:40:16.091458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.504 [2024-06-07 14:40:16.091678] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.504 [2024-06-07 14:40:16.091686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.504 [2024-06-07 14:40:16.091693] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.504 [2024-06-07 14:40:16.095192] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.504 [2024-06-07 14:40:16.104272] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.504 [2024-06-07 14:40:16.104776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.504 [2024-06-07 14:40:16.104813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.504 [2024-06-07 14:40:16.104824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.504 [2024-06-07 14:40:16.105059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.504 [2024-06-07 14:40:16.105288] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.504 [2024-06-07 14:40:16.105298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.504 [2024-06-07 14:40:16.105305] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.504 [2024-06-07 14:40:16.108802] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.504 [2024-06-07 14:40:16.118078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.504 [2024-06-07 14:40:16.118647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.504 [2024-06-07 14:40:16.118684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.504 [2024-06-07 14:40:16.118695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.504 [2024-06-07 14:40:16.118931] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.504 [2024-06-07 14:40:16.119150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.504 [2024-06-07 14:40:16.119159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.504 [2024-06-07 14:40:16.119166] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.504 [2024-06-07 14:40:16.122670] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.504 [2024-06-07 14:40:16.131950] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.504 [2024-06-07 14:40:16.132613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.504 [2024-06-07 14:40:16.132650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.504 [2024-06-07 14:40:16.132661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.504 [2024-06-07 14:40:16.132897] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.504 [2024-06-07 14:40:16.133116] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.504 [2024-06-07 14:40:16.133129] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.504 [2024-06-07 14:40:16.133137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.504 [2024-06-07 14:40:16.136643] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.504 [2024-06-07 14:40:16.145716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.504 [2024-06-07 14:40:16.146304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.504 [2024-06-07 14:40:16.146341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.504 [2024-06-07 14:40:16.146354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.504 [2024-06-07 14:40:16.146592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.504 [2024-06-07 14:40:16.146811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.504 [2024-06-07 14:40:16.146820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.504 [2024-06-07 14:40:16.146828] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.765 [2024-06-07 14:40:16.150342] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.765 [2024-06-07 14:40:16.159618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.766 [2024-06-07 14:40:16.160291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.766 [2024-06-07 14:40:16.160329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.766 [2024-06-07 14:40:16.160340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.766 [2024-06-07 14:40:16.160576] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.766 [2024-06-07 14:40:16.160795] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.766 [2024-06-07 14:40:16.160804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.766 [2024-06-07 14:40:16.160811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.766 [2024-06-07 14:40:16.164317] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.766 [2024-06-07 14:40:16.173388] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.766 [2024-06-07 14:40:16.174044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.766 [2024-06-07 14:40:16.174081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.766 [2024-06-07 14:40:16.174092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.766 [2024-06-07 14:40:16.174337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.766 [2024-06-07 14:40:16.174558] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.766 [2024-06-07 14:40:16.174567] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.766 [2024-06-07 14:40:16.174574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.766 [2024-06-07 14:40:16.178071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.766 [2024-06-07 14:40:16.187157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.766 [2024-06-07 14:40:16.187769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.766 [2024-06-07 14:40:16.187806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.766 [2024-06-07 14:40:16.187817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.766 [2024-06-07 14:40:16.188053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.766 [2024-06-07 14:40:16.188279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.766 [2024-06-07 14:40:16.188288] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.766 [2024-06-07 14:40:16.188296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.766 [2024-06-07 14:40:16.191793] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.766 [2024-06-07 14:40:16.201068] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.766 [2024-06-07 14:40:16.201662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.766 [2024-06-07 14:40:16.201681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.766 [2024-06-07 14:40:16.201688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.766 [2024-06-07 14:40:16.201905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.766 [2024-06-07 14:40:16.202121] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.766 [2024-06-07 14:40:16.202128] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.766 [2024-06-07 14:40:16.202135] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.766 [2024-06-07 14:40:16.205659] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.766 [2024-06-07 14:40:16.214936] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.766 [2024-06-07 14:40:16.215479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.766 [2024-06-07 14:40:16.215496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.766 [2024-06-07 14:40:16.215504] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.766 [2024-06-07 14:40:16.215721] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.766 [2024-06-07 14:40:16.215936] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.766 [2024-06-07 14:40:16.215944] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.766 [2024-06-07 14:40:16.215950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.766 [2024-06-07 14:40:16.219487] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.766 [2024-06-07 14:40:16.228765] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.766 [2024-06-07 14:40:16.229335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.766 [2024-06-07 14:40:16.229374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.766 [2024-06-07 14:40:16.229386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.766 [2024-06-07 14:40:16.229629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.766 [2024-06-07 14:40:16.229849] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.766 [2024-06-07 14:40:16.229858] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.766 [2024-06-07 14:40:16.229865] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.766 [2024-06-07 14:40:16.233370] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.766 [2024-06-07 14:40:16.242653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.766 [2024-06-07 14:40:16.243277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.766 [2024-06-07 14:40:16.243315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.766 [2024-06-07 14:40:16.243328] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.766 [2024-06-07 14:40:16.243565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.766 [2024-06-07 14:40:16.243785] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.766 [2024-06-07 14:40:16.243795] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.766 [2024-06-07 14:40:16.243803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.766 [2024-06-07 14:40:16.247317] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.766 [2024-06-07 14:40:16.256397] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.766 [2024-06-07 14:40:16.257070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.766 [2024-06-07 14:40:16.257107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.766 [2024-06-07 14:40:16.257118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.766 [2024-06-07 14:40:16.257361] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.766 [2024-06-07 14:40:16.257583] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.766 [2024-06-07 14:40:16.257591] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.766 [2024-06-07 14:40:16.257599] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.766 [2024-06-07 14:40:16.261096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.766 [2024-06-07 14:40:16.270173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.766 [2024-06-07 14:40:16.270788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.766 [2024-06-07 14:40:16.270826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.766 [2024-06-07 14:40:16.270837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.766 [2024-06-07 14:40:16.271073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.766 [2024-06-07 14:40:16.271300] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.766 [2024-06-07 14:40:16.271309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.766 [2024-06-07 14:40:16.271321] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.766 [2024-06-07 14:40:16.274818] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.766 [2024-06-07 14:40:16.284129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.766 [2024-06-07 14:40:16.284764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.766 [2024-06-07 14:40:16.284802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.766 [2024-06-07 14:40:16.284814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.766 [2024-06-07 14:40:16.285052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.766 [2024-06-07 14:40:16.285279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.766 [2024-06-07 14:40:16.285289] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.766 [2024-06-07 14:40:16.285296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.767 [2024-06-07 14:40:16.288805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.767 [2024-06-07 14:40:16.297881] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.767 [2024-06-07 14:40:16.298248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.767 [2024-06-07 14:40:16.298266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.767 [2024-06-07 14:40:16.298274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.767 [2024-06-07 14:40:16.298491] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.767 [2024-06-07 14:40:16.298707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.767 [2024-06-07 14:40:16.298716] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.767 [2024-06-07 14:40:16.298722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.767 [2024-06-07 14:40:16.302223] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.767 [2024-06-07 14:40:16.311704] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.767 [2024-06-07 14:40:16.312334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.767 [2024-06-07 14:40:16.312371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.767 [2024-06-07 14:40:16.312383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.767 [2024-06-07 14:40:16.312620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.767 [2024-06-07 14:40:16.312840] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.767 [2024-06-07 14:40:16.312849] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.767 [2024-06-07 14:40:16.312856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.767 [2024-06-07 14:40:16.316364] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.767 [2024-06-07 14:40:16.325436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.767 [2024-06-07 14:40:16.326104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.767 [2024-06-07 14:40:16.326142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.767 [2024-06-07 14:40:16.326154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.767 [2024-06-07 14:40:16.326400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.767 [2024-06-07 14:40:16.326621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.767 [2024-06-07 14:40:16.326629] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.767 [2024-06-07 14:40:16.326637] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.767 [2024-06-07 14:40:16.330134] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.767 [2024-06-07 14:40:16.339211] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.767 [2024-06-07 14:40:16.339923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.767 [2024-06-07 14:40:16.339961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.767 [2024-06-07 14:40:16.339971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.767 [2024-06-07 14:40:16.340215] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.767 [2024-06-07 14:40:16.340435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.767 [2024-06-07 14:40:16.340444] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.767 [2024-06-07 14:40:16.340451] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.767 [2024-06-07 14:40:16.343948] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.767 [2024-06-07 14:40:16.353036] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.767 [2024-06-07 14:40:16.353684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.767 [2024-06-07 14:40:16.353722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.767 [2024-06-07 14:40:16.353734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.767 [2024-06-07 14:40:16.353973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.767 [2024-06-07 14:40:16.354200] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.767 [2024-06-07 14:40:16.354209] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.767 [2024-06-07 14:40:16.354217] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.767 [2024-06-07 14:40:16.357712] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.767 [2024-06-07 14:40:16.366789] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.767 [2024-06-07 14:40:16.367510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.767 [2024-06-07 14:40:16.367547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.767 [2024-06-07 14:40:16.367558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.767 [2024-06-07 14:40:16.367794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.767 [2024-06-07 14:40:16.368018] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.767 [2024-06-07 14:40:16.368027] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.767 [2024-06-07 14:40:16.368034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.767 [2024-06-07 14:40:16.371541] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.767 [2024-06-07 14:40:16.380618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.767 [2024-06-07 14:40:16.381167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.767 [2024-06-07 14:40:16.381185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.767 [2024-06-07 14:40:16.381199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.767 [2024-06-07 14:40:16.381416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.767 [2024-06-07 14:40:16.381632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.767 [2024-06-07 14:40:16.381640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.767 [2024-06-07 14:40:16.381647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.767 [2024-06-07 14:40:16.385141] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:52.767 [2024-06-07 14:40:16.394425] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.767 [2024-06-07 14:40:16.394956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.767 [2024-06-07 14:40:16.394972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.767 [2024-06-07 14:40:16.394979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.767 [2024-06-07 14:40:16.395200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.767 [2024-06-07 14:40:16.395416] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.767 [2024-06-07 14:40:16.395425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.767 [2024-06-07 14:40:16.395431] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:52.767 [2024-06-07 14:40:16.398920] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:52.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 813807 Killed "${NVMF_APP[@]}" "$@" 00:37:52.767 14:40:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:52.767 14:40:16 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:52.767 14:40:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:52.767 14:40:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:52.767 14:40:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:52.767 [2024-06-07 14:40:16.408191] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:52.767 [2024-06-07 14:40:16.408745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.767 [2024-06-07 14:40:16.408760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:52.767 [2024-06-07 14:40:16.408767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:52.767 [2024-06-07 14:40:16.408990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:52.767 [2024-06-07 14:40:16.409213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:52.768 [2024-06-07 14:40:16.409221] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:52.768 [2024-06-07 14:40:16.409228] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:52.768 14:40:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=815269 00:37:52.768 14:40:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 815269 00:37:52.768 14:40:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:52.768 14:40:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 815269 ']' 00:37:52.768 14:40:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:53.030 14:40:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:53.030 14:40:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:53.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:53.030 14:40:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:53.030 14:40:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:53.030 [2024-06-07 14:40:16.412748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.030 [2024-06-07 14:40:16.422036] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.030 [2024-06-07 14:40:16.422591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.030 [2024-06-07 14:40:16.422607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.030 [2024-06-07 14:40:16.422615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.030 [2024-06-07 14:40:16.422830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.030 [2024-06-07 14:40:16.423047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.030 [2024-06-07 14:40:16.423055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.030 [2024-06-07 14:40:16.423062] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.030 [2024-06-07 14:40:16.426560] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.030 [2024-06-07 14:40:16.435838] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.030 [2024-06-07 14:40:16.436504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.030 [2024-06-07 14:40:16.436542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.030 [2024-06-07 14:40:16.436554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.030 [2024-06-07 14:40:16.436790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.030 [2024-06-07 14:40:16.437010] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.030 [2024-06-07 14:40:16.437019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.030 [2024-06-07 14:40:16.437027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.030 [2024-06-07 14:40:16.440537] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.030 [2024-06-07 14:40:16.449637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.030 [2024-06-07 14:40:16.450191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.030 [2024-06-07 14:40:16.450215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.030 [2024-06-07 14:40:16.450223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.030 [2024-06-07 14:40:16.450439] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.030 [2024-06-07 14:40:16.450654] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.030 [2024-06-07 14:40:16.450663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.030 [2024-06-07 14:40:16.450670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.030 [2024-06-07 14:40:16.454162] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.030 [2024-06-07 14:40:16.460824] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:37:53.030 [2024-06-07 14:40:16.460870] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:53.030 [2024-06-07 14:40:16.463438] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.030 [2024-06-07 14:40:16.464012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.030 [2024-06-07 14:40:16.464027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.030 [2024-06-07 14:40:16.464035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.030 [2024-06-07 14:40:16.464257] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.030 [2024-06-07 14:40:16.464473] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.030 [2024-06-07 14:40:16.464482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.030 [2024-06-07 14:40:16.464489] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.030 [2024-06-07 14:40:16.467979] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.030 [2024-06-07 14:40:16.477257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.030 [2024-06-07 14:40:16.477932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.030 [2024-06-07 14:40:16.477970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.030 [2024-06-07 14:40:16.477981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.030 [2024-06-07 14:40:16.478225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.030 [2024-06-07 14:40:16.478446] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.030 [2024-06-07 14:40:16.478455] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.030 [2024-06-07 14:40:16.478462] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.030 [2024-06-07 14:40:16.481960] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.030 [2024-06-07 14:40:16.491050] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.030 [2024-06-07 14:40:16.491701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.030 [2024-06-07 14:40:16.491739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.030 [2024-06-07 14:40:16.491750] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.030 [2024-06-07 14:40:16.491986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.030 [2024-06-07 14:40:16.492215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.030 [2024-06-07 14:40:16.492224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.030 [2024-06-07 14:40:16.492232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.030 [2024-06-07 14:40:16.495730] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.030 EAL: No free 2048 kB hugepages reported on node 1 00:37:53.030 [2024-06-07 14:40:16.504895] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.030 [2024-06-07 14:40:16.505587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.030 [2024-06-07 14:40:16.505625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.030 [2024-06-07 14:40:16.505636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.030 [2024-06-07 14:40:16.505872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.030 [2024-06-07 14:40:16.506091] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.030 [2024-06-07 14:40:16.506100] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.030 [2024-06-07 14:40:16.506108] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.030 [2024-06-07 14:40:16.509613] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.030 [2024-06-07 14:40:16.518688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.030 [2024-06-07 14:40:16.519281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.030 [2024-06-07 14:40:16.519299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.030 [2024-06-07 14:40:16.519308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.030 [2024-06-07 14:40:16.519524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.030 [2024-06-07 14:40:16.519741] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.030 [2024-06-07 14:40:16.519748] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.030 [2024-06-07 14:40:16.519755] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.030 [2024-06-07 14:40:16.523251] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.030 [2024-06-07 14:40:16.532527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.030 [2024-06-07 14:40:16.533101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.030 [2024-06-07 14:40:16.533117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.030 [2024-06-07 14:40:16.533128] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.030 [2024-06-07 14:40:16.533350] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.030 [2024-06-07 14:40:16.533566] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.030 [2024-06-07 14:40:16.533575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.030 [2024-06-07 14:40:16.533582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.030 [2024-06-07 14:40:16.537071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.030 [2024-06-07 14:40:16.546391] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.030 [2024-06-07 14:40:16.546833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.030 [2024-06-07 14:40:16.546849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.030 [2024-06-07 14:40:16.546856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.030 [2024-06-07 14:40:16.547072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.030 [2024-06-07 14:40:16.547302] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.030 [2024-06-07 14:40:16.547310] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.030 [2024-06-07 14:40:16.547317] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.030 [2024-06-07 14:40:16.547866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:53.030 [2024-06-07 14:40:16.550813] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.030 [2024-06-07 14:40:16.560311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.030 [2024-06-07 14:40:16.560856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.030 [2024-06-07 14:40:16.560873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.030 [2024-06-07 14:40:16.560881] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.031 [2024-06-07 14:40:16.561098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.031 [2024-06-07 14:40:16.561320] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.031 [2024-06-07 14:40:16.561329] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.031 [2024-06-07 14:40:16.561336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.031 [2024-06-07 14:40:16.564832] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.031 [2024-06-07 14:40:16.574121] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.031 [2024-06-07 14:40:16.574672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.031 [2024-06-07 14:40:16.574690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.031 [2024-06-07 14:40:16.574697] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.031 [2024-06-07 14:40:16.574915] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.031 [2024-06-07 14:40:16.575131] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.031 [2024-06-07 14:40:16.575142] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.031 [2024-06-07 14:40:16.575151] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.031 [2024-06-07 14:40:16.576019] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:53.031 [2024-06-07 14:40:16.576046] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:53.031 [2024-06-07 14:40:16.576052] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:53.031 [2024-06-07 14:40:16.576057] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:53.031 [2024-06-07 14:40:16.576061] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:53.031 [2024-06-07 14:40:16.576204] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:37:53.031 [2024-06-07 14:40:16.576400] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:37:53.031 [2024-06-07 14:40:16.576493] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:53.031 [2024-06-07 14:40:16.578650] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.031 [2024-06-07 14:40:16.587939] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.031 [2024-06-07 14:40:16.588635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.031 [2024-06-07 14:40:16.588676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.031 [2024-06-07 14:40:16.588688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.031 [2024-06-07 14:40:16.588929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.031 [2024-06-07 14:40:16.589150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.031 [2024-06-07 14:40:16.589159] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.031 [2024-06-07 14:40:16.589167] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:53.031 [2024-06-07 14:40:16.592672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.031 [2024-06-07 14:40:16.601761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.031 [2024-06-07 14:40:16.602352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.031 [2024-06-07 14:40:16.602381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.031 [2024-06-07 14:40:16.602390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.031 [2024-06-07 14:40:16.602616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.031 [2024-06-07 14:40:16.602835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.031 [2024-06-07 14:40:16.602843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.031 [2024-06-07 14:40:16.602851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.031 [2024-06-07 14:40:16.606352] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.031 [2024-06-07 14:40:16.615628] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.031 [2024-06-07 14:40:16.616099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.031 [2024-06-07 14:40:16.616117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.031 [2024-06-07 14:40:16.616130] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.031 [2024-06-07 14:40:16.616352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.031 [2024-06-07 14:40:16.616570] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.031 [2024-06-07 14:40:16.616578] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.031 [2024-06-07 14:40:16.616585] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.031 [2024-06-07 14:40:16.620127] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.031 [2024-06-07 14:40:16.629411] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.031 [2024-06-07 14:40:16.630064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.031 [2024-06-07 14:40:16.630103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.031 [2024-06-07 14:40:16.630114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.031 [2024-06-07 14:40:16.630360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.031 [2024-06-07 14:40:16.630581] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.031 [2024-06-07 14:40:16.630590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.031 [2024-06-07 14:40:16.630598] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.031 [2024-06-07 14:40:16.634094] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.031 [2024-06-07 14:40:16.643164] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.031 [2024-06-07 14:40:16.643717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.031 [2024-06-07 14:40:16.643755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.031 [2024-06-07 14:40:16.643768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.031 [2024-06-07 14:40:16.644006] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.031 [2024-06-07 14:40:16.644234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.031 [2024-06-07 14:40:16.644244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.031 [2024-06-07 14:40:16.644251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.031 [2024-06-07 14:40:16.647759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.031 [2024-06-07 14:40:16.657043] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.031 [2024-06-07 14:40:16.657716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.031 [2024-06-07 14:40:16.657754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.031 [2024-06-07 14:40:16.657765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.031 [2024-06-07 14:40:16.658001] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.031 [2024-06-07 14:40:16.658229] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.031 [2024-06-07 14:40:16.658243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.031 [2024-06-07 14:40:16.658251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.031 [2024-06-07 14:40:16.661748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.031 [2024-06-07 14:40:16.670824] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.031 [2024-06-07 14:40:16.671520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.031 [2024-06-07 14:40:16.671558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.031 [2024-06-07 14:40:16.671569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.031 [2024-06-07 14:40:16.671805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.031 [2024-06-07 14:40:16.672025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.031 [2024-06-07 14:40:16.672034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.031 [2024-06-07 14:40:16.672042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.031 [2024-06-07 14:40:16.675547] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.292 [2024-06-07 14:40:16.684620] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.292 [2024-06-07 14:40:16.685291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.292 [2024-06-07 14:40:16.685329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.292 [2024-06-07 14:40:16.685341] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.292 [2024-06-07 14:40:16.685580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.292 [2024-06-07 14:40:16.685800] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.292 [2024-06-07 14:40:16.685809] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.292 [2024-06-07 14:40:16.685818] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.292 [2024-06-07 14:40:16.689323] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.292 [2024-06-07 14:40:16.698419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.292 [2024-06-07 14:40:16.698901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.292 [2024-06-07 14:40:16.698919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.292 [2024-06-07 14:40:16.698928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.292 [2024-06-07 14:40:16.699144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.292 [2024-06-07 14:40:16.699367] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.292 [2024-06-07 14:40:16.699376] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.292 [2024-06-07 14:40:16.699383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.292 [2024-06-07 14:40:16.702872] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.292 [2024-06-07 14:40:16.712376] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.292 [2024-06-07 14:40:16.712933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.292 [2024-06-07 14:40:16.712950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.292 [2024-06-07 14:40:16.712958] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.292 [2024-06-07 14:40:16.713174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.292 [2024-06-07 14:40:16.713396] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.292 [2024-06-07 14:40:16.713405] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.292 [2024-06-07 14:40:16.713412] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.292 [2024-06-07 14:40:16.716903] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.292 [2024-06-07 14:40:16.726177] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.292 [2024-06-07 14:40:16.726847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.292 [2024-06-07 14:40:16.726886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.292 [2024-06-07 14:40:16.726897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.292 [2024-06-07 14:40:16.727133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.292 [2024-06-07 14:40:16.727361] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.292 [2024-06-07 14:40:16.727370] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.292 [2024-06-07 14:40:16.727378] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.292 [2024-06-07 14:40:16.730879] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.292 [2024-06-07 14:40:16.739957] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.292 [2024-06-07 14:40:16.740636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.292 [2024-06-07 14:40:16.740674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.292 [2024-06-07 14:40:16.740685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.292 [2024-06-07 14:40:16.740921] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.292 [2024-06-07 14:40:16.741140] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.292 [2024-06-07 14:40:16.741149] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.292 [2024-06-07 14:40:16.741157] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.292 [2024-06-07 14:40:16.744660] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.292 [2024-06-07 14:40:16.753754] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.292 [2024-06-07 14:40:16.754305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.292 [2024-06-07 14:40:16.754343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.292 [2024-06-07 14:40:16.754354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.292 [2024-06-07 14:40:16.754594] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.292 [2024-06-07 14:40:16.754814] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.292 [2024-06-07 14:40:16.754822] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.292 [2024-06-07 14:40:16.754830] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.292 [2024-06-07 14:40:16.758333] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.293 [2024-06-07 14:40:16.767611] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.293 [2024-06-07 14:40:16.767909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.293 [2024-06-07 14:40:16.767926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.293 [2024-06-07 14:40:16.767933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.293 [2024-06-07 14:40:16.768150] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.293 [2024-06-07 14:40:16.768371] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.293 [2024-06-07 14:40:16.768379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.293 [2024-06-07 14:40:16.768387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.293 [2024-06-07 14:40:16.771878] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.293 [2024-06-07 14:40:16.781360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.293 [2024-06-07 14:40:16.782034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.293 [2024-06-07 14:40:16.782071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.293 [2024-06-07 14:40:16.782082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.293 [2024-06-07 14:40:16.782325] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.293 [2024-06-07 14:40:16.782546] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.293 [2024-06-07 14:40:16.782555] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.293 [2024-06-07 14:40:16.782563] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.293 [2024-06-07 14:40:16.786056] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.293 [2024-06-07 14:40:16.795135] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.293 [2024-06-07 14:40:16.795735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.293 [2024-06-07 14:40:16.795754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.293 [2024-06-07 14:40:16.795762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.293 [2024-06-07 14:40:16.795978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.293 [2024-06-07 14:40:16.796199] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.293 [2024-06-07 14:40:16.796207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.293 [2024-06-07 14:40:16.796218] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.293 [2024-06-07 14:40:16.799713] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.293 [2024-06-07 14:40:16.808996] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.293 [2024-06-07 14:40:16.809528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.293 [2024-06-07 14:40:16.809566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.293 [2024-06-07 14:40:16.809577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.293 [2024-06-07 14:40:16.809813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.293 [2024-06-07 14:40:16.810033] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.293 [2024-06-07 14:40:16.810042] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.293 [2024-06-07 14:40:16.810049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.293 [2024-06-07 14:40:16.813552] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.293 [2024-06-07 14:40:16.822832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.293 [2024-06-07 14:40:16.823430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.293 [2024-06-07 14:40:16.823450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.293 [2024-06-07 14:40:16.823458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.293 [2024-06-07 14:40:16.823675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.293 [2024-06-07 14:40:16.823891] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.293 [2024-06-07 14:40:16.823898] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.293 [2024-06-07 14:40:16.823905] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.293 [2024-06-07 14:40:16.827432] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.293 [2024-06-07 14:40:16.836720] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.293 [2024-06-07 14:40:16.837147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.293 [2024-06-07 14:40:16.837163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.293 [2024-06-07 14:40:16.837171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.293 [2024-06-07 14:40:16.837392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.293 [2024-06-07 14:40:16.837608] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.293 [2024-06-07 14:40:16.837617] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.293 [2024-06-07 14:40:16.837624] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.293 [2024-06-07 14:40:16.841112] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.293 [2024-06-07 14:40:16.850604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.293 [2024-06-07 14:40:16.851026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.293 [2024-06-07 14:40:16.851041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.293 [2024-06-07 14:40:16.851049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.293 [2024-06-07 14:40:16.851270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.293 [2024-06-07 14:40:16.851486] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.293 [2024-06-07 14:40:16.851494] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.293 [2024-06-07 14:40:16.851500] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.293 [2024-06-07 14:40:16.854989] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.293 [2024-06-07 14:40:16.864471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.293 [2024-06-07 14:40:16.865010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.293 [2024-06-07 14:40:16.865025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.293 [2024-06-07 14:40:16.865032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.293 [2024-06-07 14:40:16.865254] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.293 [2024-06-07 14:40:16.865470] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.293 [2024-06-07 14:40:16.865478] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.293 [2024-06-07 14:40:16.865485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.293 [2024-06-07 14:40:16.868973] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.293 [2024-06-07 14:40:16.878250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.293 [2024-06-07 14:40:16.878794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.293 [2024-06-07 14:40:16.878809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.293 [2024-06-07 14:40:16.878816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.293 [2024-06-07 14:40:16.879031] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.293 [2024-06-07 14:40:16.879253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.293 [2024-06-07 14:40:16.879261] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.293 [2024-06-07 14:40:16.879268] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.293 [2024-06-07 14:40:16.882755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.293 [2024-06-07 14:40:16.892031] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.293 [2024-06-07 14:40:16.892736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.293 [2024-06-07 14:40:16.892774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.293 [2024-06-07 14:40:16.892785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.293 [2024-06-07 14:40:16.893025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.293 [2024-06-07 14:40:16.893252] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.293 [2024-06-07 14:40:16.893262] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.293 [2024-06-07 14:40:16.893269] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.293 [2024-06-07 14:40:16.896766] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.293 [2024-06-07 14:40:16.905842] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.293 [2024-06-07 14:40:16.906410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.294 [2024-06-07 14:40:16.906429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.294 [2024-06-07 14:40:16.906437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.294 [2024-06-07 14:40:16.906653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.294 [2024-06-07 14:40:16.906869] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.294 [2024-06-07 14:40:16.906877] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.294 [2024-06-07 14:40:16.906884] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.294 [2024-06-07 14:40:16.910386] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.294 [2024-06-07 14:40:16.919672] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.294 [2024-06-07 14:40:16.920296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.294 [2024-06-07 14:40:16.920334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.294 [2024-06-07 14:40:16.920346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.294 [2024-06-07 14:40:16.920585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.294 [2024-06-07 14:40:16.920806] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.294 [2024-06-07 14:40:16.920815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.294 [2024-06-07 14:40:16.920822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.294 [2024-06-07 14:40:16.924327] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.294 [2024-06-07 14:40:16.933608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.294 [2024-06-07 14:40:16.934297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.294 [2024-06-07 14:40:16.934335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.294 [2024-06-07 14:40:16.934347] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.294 [2024-06-07 14:40:16.934583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.294 [2024-06-07 14:40:16.934803] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.294 [2024-06-07 14:40:16.934811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.294 [2024-06-07 14:40:16.934823] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.294 [2024-06-07 14:40:16.938325] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.555 [2024-06-07 14:40:16.947415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.555 [2024-06-07 14:40:16.947974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.555 [2024-06-07 14:40:16.947992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.555 [2024-06-07 14:40:16.947999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.555 [2024-06-07 14:40:16.948221] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.555 [2024-06-07 14:40:16.948438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.555 [2024-06-07 14:40:16.948446] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.555 [2024-06-07 14:40:16.948453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.555 [2024-06-07 14:40:16.951942] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.555 [2024-06-07 14:40:16.961220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.555 [2024-06-07 14:40:16.961796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.555 [2024-06-07 14:40:16.961812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.555 [2024-06-07 14:40:16.961819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.555 [2024-06-07 14:40:16.962035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.555 [2024-06-07 14:40:16.962257] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.555 [2024-06-07 14:40:16.962265] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.555 [2024-06-07 14:40:16.962272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.555 [2024-06-07 14:40:16.965762] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.555 [2024-06-07 14:40:16.975034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.555 [2024-06-07 14:40:16.975727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.555 [2024-06-07 14:40:16.975765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.555 [2024-06-07 14:40:16.975776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.555 [2024-06-07 14:40:16.976012] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.555 [2024-06-07 14:40:16.976240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.555 [2024-06-07 14:40:16.976249] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.555 [2024-06-07 14:40:16.976256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.555 [2024-06-07 14:40:16.979750] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.555 [2024-06-07 14:40:16.988862] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.555 [2024-06-07 14:40:16.989364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.555 [2024-06-07 14:40:16.989406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.555 [2024-06-07 14:40:16.989418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.555 [2024-06-07 14:40:16.989657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.555 [2024-06-07 14:40:16.989877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.555 [2024-06-07 14:40:16.989886] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.555 [2024-06-07 14:40:16.989893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.555 [2024-06-07 14:40:16.993404] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.555 [2024-06-07 14:40:17.002677] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.555 [2024-06-07 14:40:17.003246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.555 [2024-06-07 14:40:17.003271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.555 [2024-06-07 14:40:17.003280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.555 [2024-06-07 14:40:17.003501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.555 [2024-06-07 14:40:17.003718] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.555 [2024-06-07 14:40:17.003727] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.555 [2024-06-07 14:40:17.003734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.555 [2024-06-07 14:40:17.007230] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.555 [2024-06-07 14:40:17.016502] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.555 [2024-06-07 14:40:17.017024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.555 [2024-06-07 14:40:17.017062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.555 [2024-06-07 14:40:17.017073] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.555 [2024-06-07 14:40:17.017316] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.555 [2024-06-07 14:40:17.017536] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.555 [2024-06-07 14:40:17.017544] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.555 [2024-06-07 14:40:17.017552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.555 [2024-06-07 14:40:17.021049] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.555 [2024-06-07 14:40:17.030323] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.555 [2024-06-07 14:40:17.030870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.555 [2024-06-07 14:40:17.030887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.555 [2024-06-07 14:40:17.030895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.556 [2024-06-07 14:40:17.031112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.556 [2024-06-07 14:40:17.031338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.556 [2024-06-07 14:40:17.031346] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.556 [2024-06-07 14:40:17.031353] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.556 [2024-06-07 14:40:17.034871] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.556 [2024-06-07 14:40:17.044149] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.556 [2024-06-07 14:40:17.044829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.556 [2024-06-07 14:40:17.044867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.556 [2024-06-07 14:40:17.044877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.556 [2024-06-07 14:40:17.045114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.556 [2024-06-07 14:40:17.045341] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.556 [2024-06-07 14:40:17.045350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.556 [2024-06-07 14:40:17.045357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.556 [2024-06-07 14:40:17.048864] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.556 [2024-06-07 14:40:17.057935] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.556 [2024-06-07 14:40:17.058632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.556 [2024-06-07 14:40:17.058670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.556 [2024-06-07 14:40:17.058681] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.556 [2024-06-07 14:40:17.058917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.556 [2024-06-07 14:40:17.059137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.556 [2024-06-07 14:40:17.059146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.556 [2024-06-07 14:40:17.059153] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.556 [2024-06-07 14:40:17.062656] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.556 [2024-06-07 14:40:17.071736] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.556 [2024-06-07 14:40:17.072446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.556 [2024-06-07 14:40:17.072484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.556 [2024-06-07 14:40:17.072495] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.556 [2024-06-07 14:40:17.072730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.556 [2024-06-07 14:40:17.072950] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.556 [2024-06-07 14:40:17.072959] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.556 [2024-06-07 14:40:17.072967] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.556 [2024-06-07 14:40:17.076481] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.556 [2024-06-07 14:40:17.085555] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.556 [2024-06-07 14:40:17.086269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.556 [2024-06-07 14:40:17.086307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.556 [2024-06-07 14:40:17.086319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.556 [2024-06-07 14:40:17.086558] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.556 [2024-06-07 14:40:17.086777] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.556 [2024-06-07 14:40:17.086786] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.556 [2024-06-07 14:40:17.086794] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.556 [2024-06-07 14:40:17.090297] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.556 [2024-06-07 14:40:17.099369] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.556 [2024-06-07 14:40:17.099939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.556 [2024-06-07 14:40:17.099956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.556 [2024-06-07 14:40:17.099965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.556 [2024-06-07 14:40:17.100181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.556 [2024-06-07 14:40:17.100404] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.556 [2024-06-07 14:40:17.100413] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.556 [2024-06-07 14:40:17.100420] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.556 [2024-06-07 14:40:17.103911] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.556 [2024-06-07 14:40:17.113186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.556 [2024-06-07 14:40:17.113852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.556 [2024-06-07 14:40:17.113891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.556 [2024-06-07 14:40:17.113902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.556 [2024-06-07 14:40:17.114138] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.556 [2024-06-07 14:40:17.114366] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.556 [2024-06-07 14:40:17.114375] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.556 [2024-06-07 14:40:17.114383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.556 [2024-06-07 14:40:17.117879] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.556 [2024-06-07 14:40:17.126952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.556 [2024-06-07 14:40:17.127624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.556 [2024-06-07 14:40:17.127662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.556 [2024-06-07 14:40:17.127678] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.556 [2024-06-07 14:40:17.127914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.556 [2024-06-07 14:40:17.128134] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.556 [2024-06-07 14:40:17.128143] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.556 [2024-06-07 14:40:17.128151] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.556 [2024-06-07 14:40:17.131655] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.556 [2024-06-07 14:40:17.140731] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.556 [2024-06-07 14:40:17.141334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.556 [2024-06-07 14:40:17.141371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.556 [2024-06-07 14:40:17.141383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.556 [2024-06-07 14:40:17.141622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.556 [2024-06-07 14:40:17.141841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.556 [2024-06-07 14:40:17.141850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.556 [2024-06-07 14:40:17.141858] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.556 [2024-06-07 14:40:17.145366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.556 [2024-06-07 14:40:17.154662] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.556 [2024-06-07 14:40:17.155304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.556 [2024-06-07 14:40:17.155342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.556 [2024-06-07 14:40:17.155356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.556 [2024-06-07 14:40:17.155595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.556 [2024-06-07 14:40:17.155815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.556 [2024-06-07 14:40:17.155824] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.556 [2024-06-07 14:40:17.155831] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.556 [2024-06-07 14:40:17.159337] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.556 [2024-06-07 14:40:17.168413] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.556 [2024-06-07 14:40:17.169098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.556 [2024-06-07 14:40:17.169136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.557 [2024-06-07 14:40:17.169148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.557 [2024-06-07 14:40:17.169397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.557 [2024-06-07 14:40:17.169618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.557 [2024-06-07 14:40:17.169631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.557 [2024-06-07 14:40:17.169638] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.557 [2024-06-07 14:40:17.173138] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.557 [2024-06-07 14:40:17.182215] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.557 [2024-06-07 14:40:17.182888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.557 [2024-06-07 14:40:17.182926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.557 [2024-06-07 14:40:17.182937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.557 [2024-06-07 14:40:17.183174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.557 [2024-06-07 14:40:17.183402] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.557 [2024-06-07 14:40:17.183411] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.557 [2024-06-07 14:40:17.183419] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.557 [2024-06-07 14:40:17.186914] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.557 [2024-06-07 14:40:17.195988] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.557 [2024-06-07 14:40:17.196571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.557 [2024-06-07 14:40:17.196590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.557 [2024-06-07 14:40:17.196598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.557 [2024-06-07 14:40:17.196815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.557 [2024-06-07 14:40:17.197031] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.557 [2024-06-07 14:40:17.197039] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.557 [2024-06-07 14:40:17.197046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.557 [2024-06-07 14:40:17.200544] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.818 [2024-06-07 14:40:17.209817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.818 [2024-06-07 14:40:17.210377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.818 [2024-06-07 14:40:17.210394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.818 [2024-06-07 14:40:17.210401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.818 [2024-06-07 14:40:17.210617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.818 [2024-06-07 14:40:17.210833] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.818 [2024-06-07 14:40:17.210841] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.818 [2024-06-07 14:40:17.210848] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.818 [2024-06-07 14:40:17.214341] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.818 [2024-06-07 14:40:17.223620] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.818 [2024-06-07 14:40:17.224111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.818 [2024-06-07 14:40:17.224126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.818 [2024-06-07 14:40:17.224133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.818 [2024-06-07 14:40:17.224353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.818 [2024-06-07 14:40:17.224569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.818 [2024-06-07 14:40:17.224577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.818 [2024-06-07 14:40:17.224583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.818 [2024-06-07 14:40:17.228071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
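Each failed attempt above ends in _bdev_nvme_reset_ctrlr_complete, and the cycle repeats until the listener finally appears. As general background, how long the bdev_nvme layer keeps retrying a lost controller, and how often, is governed by the reconnect policy chosen when the controller is attached; a minimal sketch follows, assuming an SPDK build whose rpc.py exposes these bdev_nvme_attach_controller options. The bdevperf harness wires its controller up differently, so the values here are illustrative only:

  # Attach a controller with an explicit reconnect policy (illustrative values):
  # retry every 2 s, declare the controller lost after 30 s without a connection.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 30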
00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:53.818 [2024-06-07 14:40:17.237551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.818 [2024-06-07 14:40:17.238146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.818 [2024-06-07 14:40:17.238162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.818 [2024-06-07 14:40:17.238169] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.818 [2024-06-07 14:40:17.238389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.818 [2024-06-07 14:40:17.238605] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.818 [2024-06-07 14:40:17.238613] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.818 [2024-06-07 14:40:17.238620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.818 [2024-06-07 14:40:17.242110] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.818 [2024-06-07 14:40:17.251424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.818 [2024-06-07 14:40:17.251963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.818 [2024-06-07 14:40:17.251979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.818 [2024-06-07 14:40:17.251986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.818 [2024-06-07 14:40:17.252207] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.818 [2024-06-07 14:40:17.252423] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.818 [2024-06-07 14:40:17.252431] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.818 [2024-06-07 14:40:17.252438] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.818 [2024-06-07 14:40:17.255927] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.818 [2024-06-07 14:40:17.265201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.818 [2024-06-07 14:40:17.265786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.818 [2024-06-07 14:40:17.265801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.818 [2024-06-07 14:40:17.265809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.818 [2024-06-07 14:40:17.266024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.818 [2024-06-07 14:40:17.266244] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.818 [2024-06-07 14:40:17.266252] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.818 [2024-06-07 14:40:17.266259] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.818 [2024-06-07 14:40:17.269748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:53.818 [2024-06-07 14:40:17.276837] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:53.818 [2024-06-07 14:40:17.279021] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.818 [2024-06-07 14:40:17.279686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.818 [2024-06-07 14:40:17.279724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.818 [2024-06-07 14:40:17.279735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.818 [2024-06-07 14:40:17.279972] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.818 [2024-06-07 14:40:17.280192] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.818 [2024-06-07 14:40:17.280208] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.818 [2024-06-07 14:40:17.280216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:53.818 [2024-06-07 14:40:17.283714] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.818 [2024-06-07 14:40:17.292784] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.818 [2024-06-07 14:40:17.293518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.818 [2024-06-07 14:40:17.293556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.818 [2024-06-07 14:40:17.293567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.818 [2024-06-07 14:40:17.293803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.818 [2024-06-07 14:40:17.294023] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.818 [2024-06-07 14:40:17.294036] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.818 [2024-06-07 14:40:17.294043] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.818 [2024-06-07 14:40:17.297549] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.818 Malloc0 00:37:53.818 [2024-06-07 14:40:17.306622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:53.818 [2024-06-07 14:40:17.307333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.818 [2024-06-07 14:40:17.307371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.818 [2024-06-07 14:40:17.307383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.818 [2024-06-07 14:40:17.307622] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:53.818 [2024-06-07 14:40:17.307843] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.818 [2024-06-07 14:40:17.307852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.818 [2024-06-07 14:40:17.307860] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:53.818 [2024-06-07 14:40:17.311364] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.818 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:53.819 14:40:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:53.819 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:53.819 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:53.819 [2024-06-07 14:40:17.320436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.819 [2024-06-07 14:40:17.321128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.819 [2024-06-07 14:40:17.321166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.819 [2024-06-07 14:40:17.321177] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.819 [2024-06-07 14:40:17.321424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.819 [2024-06-07 14:40:17.321645] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.819 [2024-06-07 14:40:17.321654] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.819 [2024-06-07 14:40:17.321662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.819 [2024-06-07 14:40:17.325156] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:53.819 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:53.819 14:40:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:53.819 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:53.819 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:53.819 [2024-06-07 14:40:17.334227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:53.819 [2024-06-07 14:40:17.334900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.819 [2024-06-07 14:40:17.334938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1030d80 with addr=10.0.0.2, port=4420 00:37:53.819 [2024-06-07 14:40:17.334949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1030d80 is same with the state(5) to be set 00:37:53.819 [2024-06-07 14:40:17.335185] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1030d80 (9): Bad file descriptor 00:37:53.819 [2024-06-07 14:40:17.335411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:37:53.819 [2024-06-07 14:40:17.335420] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:37:53.819 [2024-06-07 14:40:17.335428] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:37:53.819 [2024-06-07 14:40:17.338246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:53.819 [2024-06-07 14:40:17.338927] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:53.819 14:40:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:53.819 14:40:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 814174 00:37:53.819 [2024-06-07 14:40:17.348004] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:37:54.078 [2024-06-07 14:40:17.518910] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
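Interleaved with the reconnect noise, the trace above stands the target side up step by step: a TCP transport, a 64 MiB Malloc0 ramdisk with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and finally a listener on 10.0.0.2 port 4420, after which the next reset attempt succeeds ("Resetting controller successful"). Issued directly with scripts/rpc.py against a running nvmf_tgt, rather than through the harness's rpc_cmd wrapper, the same sequence would look roughly like this (a sketch, assuming the default RPC socket):

  # stand up the NVMe/TCP target the same way bdevperf.sh does
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB IO unit size
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MiB ramdisk, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420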
00:38:02.213 00:38:02.213 Latency(us) 00:38:02.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:02.213 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:02.213 Verification LBA range: start 0x0 length 0x4000 00:38:02.213 Nvme1n1 : 15.01 8316.43 32.49 10182.23 0.00 6893.81 788.48 15947.09 00:38:02.213 =================================================================================================================== 00:38:02.213 Total : 8316.43 32.49 10182.23 0.00 6893.81 788.48 15947.09 00:38:02.474 14:40:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:38:02.474 14:40:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:02.474 14:40:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:02.474 14:40:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:02.474 14:40:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.474 14:40:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:38:02.474 14:40:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:38:02.474 14:40:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:02.474 14:40:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:38:02.474 14:40:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:02.474 14:40:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:38:02.474 14:40:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:02.474 14:40:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:02.474 rmmod nvme_tcp 00:38:02.474 rmmod nvme_fabrics 00:38:02.474 rmmod nvme_keyring 00:38:02.474 14:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:02.474 14:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:38:02.474 14:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:38:02.474 14:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 815269 ']' 00:38:02.474 14:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 815269 00:38:02.474 14:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 815269 ']' 00:38:02.474 14:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 815269 00:38:02.474 14:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:38:02.474 14:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:02.474 14:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 815269 00:38:02.474 14:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:38:02.474 14:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:38:02.474 14:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 815269' 00:38:02.474 killing process with pid 815269 00:38:02.474 14:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 815269 00:38:02.474 14:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 815269 00:38:02.735 14:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:02.735 14:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:02.735 
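In the summary table above, the Average/min/max columns are the latencies the Latency(us) heading refers to, Fail/s counts I/Os per second that completed with an error, and MiB/s follows directly from IOPS at this job's 4096-byte I/O size: 8316.43 IOPS * 4096 B is about 32.49 MiB/s, matching the reported figure. A quick way to re-derive that column from the other two values (plain awk, nothing SPDK-specific):

  # MiB/s = IOPS * io_size_bytes / 2^20 ; reproduces the 32.49 MiB/s figure
  awk 'BEGIN { printf "%.2f MiB/s\n", 8316.43 * 4096 / 1048576 }'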
14:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:02.735 14:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:02.735 14:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:02.735 14:40:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:02.735 14:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:02.735 14:40:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:04.648 14:40:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:04.648 00:38:04.648 real 0m28.527s 00:38:04.648 user 1m2.875s 00:38:04.648 sys 0m7.725s 00:38:04.648 14:40:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:04.648 14:40:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:04.648 ************************************ 00:38:04.648 END TEST nvmf_bdevperf 00:38:04.648 ************************************ 00:38:04.909 14:40:28 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:04.909 14:40:28 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:04.909 14:40:28 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:04.909 14:40:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:04.909 ************************************ 00:38:04.909 START TEST nvmf_target_disconnect 00:38:04.909 ************************************ 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:04.909 * Looking for test storage... 
00:38:04.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:04.909 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:38:04.910 14:40:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:13.070 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:13.070 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.070 14:40:36 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:13.070 Found net devices under 0000:31:00.0: cvl_0_0 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:13.070 Found net devices under 0000:31:00.1: cvl_0_1 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:13.070 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:13.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:13.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:38:13.071 00:38:13.071 --- 10.0.0.2 ping statistics --- 00:38:13.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.071 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:13.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:13.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:38:13.071 00:38:13.071 --- 10.0.0.1 ping statistics --- 00:38:13.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.071 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:13.071 14:40:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:13.332 ************************************ 00:38:13.332 START TEST nvmf_target_disconnect_tc1 00:38:13.332 ************************************ 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:38:13.332 
14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:13.332 EAL: No free 2048 kB hugepages reported on node 1 00:38:13.332 [2024-06-07 14:40:36.837308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:13.332 [2024-06-07 14:40:36.837362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9cc640 with addr=10.0.0.2, port=4420 00:38:13.332 [2024-06-07 14:40:36.837389] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:38:13.332 [2024-06-07 14:40:36.837404] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:13.332 [2024-06-07 14:40:36.837411] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:38:13.332 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:38:13.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:38:13.332 Initializing NVMe Controllers 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:13.332 00:38:13.332 real 0m0.111s 00:38:13.332 user 0m0.043s 00:38:13.332 sys 0m0.068s 
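The trace above is the whole of nvmf_target_disconnect_tc1: the reconnect example is pointed at 10.0.0.2:4420 before any nvmf_tgt has been started, so spdk_nvme_probe() is expected to fail with connect() errno 111 (ECONNREFUSED on Linux), and the NOT wrapper treats that failure as a pass. A minimal bash sketch of the same check, assuming the build-tree path shown in the trace:

  # Sketch of the tc1 check; nothing is listening on 10.0.0.2:4420 yet,
  # so the probe is expected to be refused and the example to exit nonzero.
  reconnect=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
  if "$reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
    echo "unexpected: probe succeeded with no target listening" >&2
    exit 1
  fi
  echo "tc1 passed: probe failed as expected"

Only the exit status matters here; the "reconnect: errors occurred" line in the trace is the example's own report of the failed probe.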
00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:38:13.332 ************************************ 00:38:13.332 END TEST nvmf_target_disconnect_tc1 00:38:13.332 ************************************ 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:13.332 ************************************ 00:38:13.332 START TEST nvmf_target_disconnect_tc2 00:38:13.332 ************************************ 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:38:13.332 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:13.333 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:13.333 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:13.333 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:13.333 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=821841 00:38:13.333 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 821841 00:38:13.333 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 821841 ']' 00:38:13.333 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:13.333 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:13.333 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:13.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:13.333 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:13.333 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:13.333 14:40:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:13.593 [2024-06-07 14:40:36.978969] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
00:38:13.593 [2024-06-07 14:40:36.979031] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:13.593 EAL: No free 2048 kB hugepages reported on node 1 00:38:13.593 [2024-06-07 14:40:37.074502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:13.593 [2024-06-07 14:40:37.123551] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:13.593 [2024-06-07 14:40:37.123604] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:13.593 [2024-06-07 14:40:37.123612] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:13.593 [2024-06-07 14:40:37.123619] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:13.593 [2024-06-07 14:40:37.123625] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:13.593 [2024-06-07 14:40:37.124300] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:38:13.593 [2024-06-07 14:40:37.124556] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:38:13.593 [2024-06-07 14:40:37.124787] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:38:13.593 [2024-06-07 14:40:37.124790] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:38:14.165 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:14.165 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:38:14.165 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:14.165 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:14.165 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:14.165 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:14.165 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:14.165 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:14.165 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:14.165 Malloc0 00:38:14.165 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:14.165 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:14.165 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:14.165 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:14.165 [2024-06-07 14:40:37.811057] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:14.425 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:14.425 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:14.425 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:14.425 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:14.425 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:14.425 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:14.425 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:14.425 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:14.425 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:14.425 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:14.425 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:14.425 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:14.425 [2024-06-07 14:40:37.851308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:14.425 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:14.425 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:14.426 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:14.426 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:14.426 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:14.426 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=821943 00:38:14.426 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:38:14.426 14:40:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:14.426 EAL: No free 2048 kB hugepages reported on node 1 00:38:16.345 14:40:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 821841 00:38:16.345 14:40:39 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 
00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Write completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.345 Read completed with error (sct=0, sc=8) 00:38:16.345 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Write completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Write completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Write completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Write completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Write completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Write completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Write completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 [2024-06-07 14:40:39.883908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:16.346 [2024-06-07 14:40:39.884172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.884189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.884673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.884711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 
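For nvmf_target_disconnect_tc2, the preceding trace starts nvmf_tgt inside the cvl_0_0_ns_spdk namespace (-m 0xF0, pid 821841), provisions it over /var/tmp/spdk.sock, backgrounds the reconnect example against 10.0.0.2:4420 (pid 821943), and then removes the target with kill -9; the completion errors and refused connects in the surrounding records follow directly from that. The rpc_cmd calls in the trace are roughly equivalent to the following scripts/rpc.py invocations (a sketch against the default RPC socket shown above):

  # Sketch of the provisioning steps driven through rpc_cmd in the trace above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # The test then backgrounds the reconnect example and, after two seconds of
  # I/O, removes the target process with: kill -9 821841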
00:38:16.346 [2024-06-07 14:40:39.885008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.885027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.885212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.885223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.885689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.885726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.886029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.886042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.886485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.886523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.886889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.886902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.887432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.887469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.887798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.887810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.888139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.888149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.888473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.888510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 
00:38:16.346 [2024-06-07 14:40:39.888887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.888899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.889131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.889142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.889349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.889359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.889654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.889664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.889933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.889944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.890227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.890238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.890637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.890646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.890960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.890970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 00:38:16.346 [2024-06-07 14:40:39.891147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.346 [2024-06-07 14:40:39.891157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.346 qpair failed and we were unable to recover it. 
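The failure records before and after this point all follow one pattern: posix_sock_create() reports connect() errno 111 (ECONNREFUSED, since the SIGKILLed target no longer listens on 10.0.0.2:4420), nvme_tcp_qpair_connect_sock() marks tqpair 0x108c730 as failed, and the host gives up with "qpair failed and we were unable to recover it." The bursts of "Read/Write completed with error (sct=0, sc=8)" are up to 32 outstanding commands per queue (-q 32) being completed with a generic aborted status once the CQ poll hits transport error -6 (No such device or address). A small sketch for tallying both kinds of record from a saved copy of this trace (the file name is illustrative, not part of the trace):

  # Count refused reconnect attempts and errored I/O completions in a saved log.
  log=nvmf_target_disconnect.trace   # illustrative name; substitute the real capture
  grep -c 'connect() failed, errno = 111' "$log"
  grep -c 'completed with error (sct=0, sc=8)' "$log"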
00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Write completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Write completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Write completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Write completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.346 Read completed with error (sct=0, sc=8) 00:38:16.346 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 [2024-06-07 14:40:39.891425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write 
completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Read completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 Write completed with error (sct=0, sc=8) 00:38:16.347 starting I/O failed 00:38:16.347 [2024-06-07 14:40:39.891669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:16.347 [2024-06-07 14:40:39.892015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.892026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.892364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.892374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.892591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.892601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 
00:38:16.347 [2024-06-07 14:40:39.892874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.892884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.893110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.893120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.893462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.893472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.893779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.893789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.894123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.894133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.894344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.894357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.894642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.894654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.894998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.895008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.895244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.895255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.895630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.895640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 
00:38:16.347 [2024-06-07 14:40:39.895941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.895951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.896160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.896171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.896515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.896525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.896747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.896757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.897019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.897029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.347 qpair failed and we were unable to recover it. 00:38:16.347 [2024-06-07 14:40:39.897380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.347 [2024-06-07 14:40:39.897390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.897677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.897687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.897992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.898003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.898264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.898273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.898581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.898590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 
00:38:16.348 [2024-06-07 14:40:39.898923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.898931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.899251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.899261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.899562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.899571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.899876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.899885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.900207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.900216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.900519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.900529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.900844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.900853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.901052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.901062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.901481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.901490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.901780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.901790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 
00:38:16.348 [2024-06-07 14:40:39.901976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.901986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.902388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.902398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.902700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.902711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.903034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.903044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.903372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.903382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.903612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.903622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.903961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.903970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.904330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.904340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.904670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.904680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.904852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.904862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 
00:38:16.348 [2024-06-07 14:40:39.905257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.905267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.905569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.905578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.905798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.905807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.906118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.906129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.906426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.906442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.906765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.906775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.907112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.907122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.907460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.907471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.907683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.907694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.907989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.907999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 
00:38:16.348 [2024-06-07 14:40:39.908393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.348 [2024-06-07 14:40:39.908403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.348 qpair failed and we were unable to recover it. 00:38:16.348 [2024-06-07 14:40:39.908739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.908748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.909137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.909146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.909467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.909477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.909798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.909808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.910139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.910148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.910477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.910487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.910751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.910760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.911098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.911108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.911286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.911296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 
00:38:16.349 [2024-06-07 14:40:39.911616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.911625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.911911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.911920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.912294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.912304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.912625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.912634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.912817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.912828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.913081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.913091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.913380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.913389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.913693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.913703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.914011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.914021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.914260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.914273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 
00:38:16.349 [2024-06-07 14:40:39.914497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.914506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.914812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.914823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.915203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.915213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.915530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.915540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.915901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.915911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.916208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.916219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.916642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.916652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.916951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.916960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.917249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.917259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.917581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.917591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 
00:38:16.349 [2024-06-07 14:40:39.917906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.917915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.918225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.918235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.918559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.918568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.918860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.918870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.919170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.349 [2024-06-07 14:40:39.919179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.349 qpair failed and we were unable to recover it. 00:38:16.349 [2024-06-07 14:40:39.919486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.919496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.919815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.919824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.920005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.920014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.920222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.920233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.921419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.921443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 
00:38:16.350 [2024-06-07 14:40:39.921786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.921797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.922134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.922143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.922470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.922481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.922747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.922757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.923051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.923061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.923348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.923358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.923550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.923559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.923868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.923877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.924177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.924187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.924450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.924460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 
00:38:16.350 [2024-06-07 14:40:39.924792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.924805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.925150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.925160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.925535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.925546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.925835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.925845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.926127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.926137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.926469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.926480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.926777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.926787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.927102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.927111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.927480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.927490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.927771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.927781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 
00:38:16.350 [2024-06-07 14:40:39.928169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.928179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.928499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.928510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.928816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.928825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.929048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.929057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.929278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.929288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.929579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.350 [2024-06-07 14:40:39.929588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.350 qpair failed and we were unable to recover it. 00:38:16.350 [2024-06-07 14:40:39.929942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.929951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.930238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.930253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.930578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.930587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.930893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.930903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 
00:38:16.351 [2024-06-07 14:40:39.931093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.931102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.931396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.931405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.931706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.931715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.932003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.932013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.932357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.932367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.932679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.932688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.933004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.933013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.933316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.933329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.933659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.933668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.933987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.933997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 
00:38:16.351 [2024-06-07 14:40:39.934313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.934323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.934620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.934629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.934949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.934958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.935167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.935176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.935453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.935463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.935861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.935870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.936171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.936181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.936376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.936385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.936653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.936662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.936875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.936883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 
00:38:16.351 [2024-06-07 14:40:39.937175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.937186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.937956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.937976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.938304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.938315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.938623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.938633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.938948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.938958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.351 [2024-06-07 14:40:39.939302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.351 [2024-06-07 14:40:39.939312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.351 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.939617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.939626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.939943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.939952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.940337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.940347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.940618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.940627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 
00:38:16.352 [2024-06-07 14:40:39.940931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.940941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.941270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.941279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.941491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.941500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.941829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.941838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.942130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.942140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.942465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.942475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.942660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.942669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.942878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.942887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.943228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.943238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.943454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.943463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 
00:38:16.352 [2024-06-07 14:40:39.943780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.943789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.944128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.944138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.944370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.944379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.944738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.944747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.945147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.945156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.945486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.945496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.945691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.945702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.945920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.945929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.946130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.946140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.946511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.946521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 
00:38:16.352 [2024-06-07 14:40:39.946827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.946836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.947145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.947154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.947451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.947460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.947749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.947758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.948079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.948088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.948423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.948433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.948765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.352 [2024-06-07 14:40:39.948775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.352 qpair failed and we were unable to recover it. 00:38:16.352 [2024-06-07 14:40:39.949110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.949120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.949438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.949448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.949871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.949881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 
00:38:16.353 [2024-06-07 14:40:39.950193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.950211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.950577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.950586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.950909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.950919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.951134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.951143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.951456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.951466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.951777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.951786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.952100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.952109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.952423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.952433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.952746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.952755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.953037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.953046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 
00:38:16.353 [2024-06-07 14:40:39.953173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.953182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.953493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.953502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.953789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.953798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.954102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.954112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.954303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.954313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.954684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.954694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.955004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.955013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.955301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.955311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.955478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.955487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.955859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.955868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 
00:38:16.353 [2024-06-07 14:40:39.956181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.956190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.956538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.956547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.956807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.956816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.957196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.957206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.957504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.957513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.957822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.957832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.958123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.958133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.958446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.958455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.958769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.958779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.959095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.959105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 
00:38:16.353 [2024-06-07 14:40:39.959446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.959456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.353 [2024-06-07 14:40:39.959794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.353 [2024-06-07 14:40:39.959803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.353 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.960027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.960036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.960367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.960376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.960578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.960587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.960890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.960899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.961134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.961143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.961486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.961496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.961819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.961829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.962144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.962153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 
00:38:16.354 [2024-06-07 14:40:39.962348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.962358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.962666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.962676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.962988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.962999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.963318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.963336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.963659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.963668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.963963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.963973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.964151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.964161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.964489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.964499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.964855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.964864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.965173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.965183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 
00:38:16.354 [2024-06-07 14:40:39.965453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.965462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.965688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.965696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.965817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.965826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.966296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.966386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.966762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.966797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.967151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.967179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.967530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.967539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.967867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.967876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.968141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.968150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.968544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.968553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 
00:38:16.354 [2024-06-07 14:40:39.968884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.968894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.969297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.969307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.969687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.969696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.354 [2024-06-07 14:40:39.970008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.354 [2024-06-07 14:40:39.970017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.354 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.970336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.970345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.970724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.970733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.971036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.971044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.971369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.971379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.971774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.971784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.972080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.972092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 
00:38:16.355 [2024-06-07 14:40:39.972378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.972387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.972736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.972746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.973054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.973064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.973371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.973380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.973589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.973597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.973938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.973948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.974242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.974252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.974579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.974588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.974971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.974980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.975315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.975325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 
00:38:16.355 [2024-06-07 14:40:39.975645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.975654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.975873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.975882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.976222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.976231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.976521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.976531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.976839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.976849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.977180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.977190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.977538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.977548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.977839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.977849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.978155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.978165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.978356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.978367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 
00:38:16.355 [2024-06-07 14:40:39.978674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.978684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.979017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.979027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.979354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.979364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.979749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.979758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.980076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.980085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.980370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.980380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.980674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.980683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.980896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.980905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.981233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.355 [2024-06-07 14:40:39.981243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.355 qpair failed and we were unable to recover it. 00:38:16.355 [2024-06-07 14:40:39.981547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.981556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 
00:38:16.356 [2024-06-07 14:40:39.981887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.981896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.982215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.982225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.982536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.982545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.982857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.982875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.983189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.983200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.983521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.983530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.983841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.983850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.984158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.984168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.984472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.984482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.984784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.984793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 
00:38:16.356 [2024-06-07 14:40:39.985110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.985119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.985501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.985510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.985819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.985828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.986164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.986174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.986413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.986423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.986728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.986737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.987071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.987081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.356 [2024-06-07 14:40:39.987438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.356 [2024-06-07 14:40:39.987447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.356 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.987831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.987841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.988040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.988049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 
00:38:16.631 [2024-06-07 14:40:39.988402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.988412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.988710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.988719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.989043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.989052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.989362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.989371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.989708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.989717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.990037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.990054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.990375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.990384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.990563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.990573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.990799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.990809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.991143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.991152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 
00:38:16.631 [2024-06-07 14:40:39.991479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.991489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.991816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.991825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.992203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.992212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.992392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.992402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.992803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.992812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.993153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.993162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.993460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.993469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.993784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.631 [2024-06-07 14:40:39.993796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.631 qpair failed and we were unable to recover it. 00:38:16.631 [2024-06-07 14:40:39.994091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.994100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.994382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.994392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 
00:38:16.632 [2024-06-07 14:40:39.994701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.994710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.995098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.995107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.995333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.995342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.995643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.995652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.995942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.995951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.996265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.996274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.996589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.996598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.996914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.996923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.997246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.997255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.997582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.997591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 
00:38:16.632 [2024-06-07 14:40:39.998004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.998013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.998360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.998369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.998681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.998690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.998983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.998993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.999301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.999310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.999636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.999645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:39.999979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:39.999989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.000318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.000328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.000612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.000621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.000922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.000931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 
00:38:16.632 [2024-06-07 14:40:40.001333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.001343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.001546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.001555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.001773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.001783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.002093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.002103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.002404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.002417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.003045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.003055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.003340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.003350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.003670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.003680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.004015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.004025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.004235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.004244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 
00:38:16.632 [2024-06-07 14:40:40.004578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.004589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.004972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.004982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.005324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.005334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.005638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.632 [2024-06-07 14:40:40.005647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.632 qpair failed and we were unable to recover it. 00:38:16.632 [2024-06-07 14:40:40.005958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.005967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.006180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.006189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.006551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.006561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.006871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.006882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.007262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.007272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.007571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.007581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 
00:38:16.633 [2024-06-07 14:40:40.007807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.007817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.008108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.008117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.008457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.008475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.008794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.008803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.009027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.009036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.009225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.009237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.009567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.009577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.009883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.009893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.010214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.010224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.010457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.010466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 
00:38:16.633 [2024-06-07 14:40:40.010670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.010680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.010889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.010900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.011143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.011152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.011448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.011458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.011660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.011669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.011945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.011954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.012124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.012135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.012437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.012448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.013201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.013222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.013570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.013581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 
00:38:16.633 [2024-06-07 14:40:40.013893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.013904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.014216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.014226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.014415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.014425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.014759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.014769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.015056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.015065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.015254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.015267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.015589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.015599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.015736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.015745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.016045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.016055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.016286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.016296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 
00:38:16.633 [2024-06-07 14:40:40.016423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.016432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.633 [2024-06-07 14:40:40.016641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.633 [2024-06-07 14:40:40.016650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.633 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.016987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.016996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.017088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.017097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.017390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.017400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.017746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.017756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.017848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.017857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.017958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.017968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.018174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.018183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.018276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.018286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 
00:38:16.634 [2024-06-07 14:40:40.018658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.018668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.019049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.019059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.019251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.019261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.019671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.019680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.019917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.019926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.020108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.020118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.020477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.020487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.020818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.020828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.021159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.021169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.021497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.021507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 
00:38:16.634 [2024-06-07 14:40:40.021808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.021818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.022097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.022107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.022372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.022390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.022707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.022716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.022953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.022963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.023286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.023296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.023635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.023644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.023960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.023969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.024271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.024282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.024593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.024603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 
00:38:16.634 [2024-06-07 14:40:40.024998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.025007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.025343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.025353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.025728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.025738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.026020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.026029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.026230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.026241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.026555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.026565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.026881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.026899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.027026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.027037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.634 [2024-06-07 14:40:40.027202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.634 [2024-06-07 14:40:40.027213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.634 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.027440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.027449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 
00:38:16.635 [2024-06-07 14:40:40.027749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.027759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.028075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.028084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.028279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.028289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.028491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.028500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.028720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.028729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.029007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.029016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.029188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.029203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.029533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.029542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.029878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.029888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.030202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.030214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 
00:38:16.635 [2024-06-07 14:40:40.030532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.030541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.030808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.030817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.031201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.031211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.031524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.031533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.031847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.031857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.032170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.032179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.032551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.032561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.032876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.032886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.033259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.033269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.033570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.033579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 
00:38:16.635 [2024-06-07 14:40:40.033906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.033915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.034217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.034227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.034546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.034555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.034777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.034786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.034980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.034989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.035308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.035317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.035541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.035551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.035852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.035861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.036140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.036149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.036425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.036435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 
00:38:16.635 [2024-06-07 14:40:40.036758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.036768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.037090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.037100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.037307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.037316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.037663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.635 [2024-06-07 14:40:40.037673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.635 qpair failed and we were unable to recover it. 00:38:16.635 [2024-06-07 14:40:40.037955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.037964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.038289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.038299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.038611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.038622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.038838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.038847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.039173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.039183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.039482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.039493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 
00:38:16.636 [2024-06-07 14:40:40.039830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.039839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.040020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.040030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.040347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.040356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.040637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.040647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.040990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.040999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.041391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.041400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.041736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.041749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.042065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.042076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.042439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.042449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.042776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.042785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 
00:38:16.636 [2024-06-07 14:40:40.043076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.043085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.043463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.043472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.043808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.043817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.044143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.636 [2024-06-07 14:40:40.044152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.636 qpair failed and we were unable to recover it. 00:38:16.636 [2024-06-07 14:40:40.044495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.044505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.044910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.044919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.045249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.045259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.045577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.045586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.045902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.045912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.046225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.046234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 
00:38:16.637 [2024-06-07 14:40:40.046529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.046539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.046894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.046903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.047170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.047179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.047489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.047498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.047821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.047831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.048170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.048179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.048378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.048389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.048589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.048598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.048950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.048960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.049300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.049310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 
00:38:16.637 [2024-06-07 14:40:40.049572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.049581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.049895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.049904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.050243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.050291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.050542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.050551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.050769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.050778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.051097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.051106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.051399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.051408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.051726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.051736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.052042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.052051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.052359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.052368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 
00:38:16.637 [2024-06-07 14:40:40.052543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.637 [2024-06-07 14:40:40.052552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.637 qpair failed and we were unable to recover it. 00:38:16.637 [2024-06-07 14:40:40.052912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.052922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.053201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.053210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.053525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.053534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.053866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.053876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.054192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.054206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.054525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.054534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.054873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.054882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.055198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.055207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.055586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.055595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 
00:38:16.638 [2024-06-07 14:40:40.055980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.055989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.056173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.056182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.056545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.056555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.056932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.056942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.057412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.057449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.057791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.057803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.058110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.058119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.058303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.058313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.058636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.058646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.058987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.058997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 
00:38:16.638 [2024-06-07 14:40:40.059315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.059324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.059718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.059727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.060082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.060100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.060297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.060307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.060509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.060522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.060862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.060872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.061191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.061212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.061414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.638 [2024-06-07 14:40:40.061424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.638 qpair failed and we were unable to recover it. 00:38:16.638 [2024-06-07 14:40:40.061753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.061763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.062045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.062055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 
00:38:16.639 [2024-06-07 14:40:40.062317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.062326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.062679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.062688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.063017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.063027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.063206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.063217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.063518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.063527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.063859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.063867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.064190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.064203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.064540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.064549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.064866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.064876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.065251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.065261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 
00:38:16.639 [2024-06-07 14:40:40.065552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.065560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.065886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.065895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.066205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.066215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.066385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.066395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.066711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.066721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.067044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.067053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.067356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.067366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.067694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.067703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.068011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.068020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.068350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.068359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 
00:38:16.639 [2024-06-07 14:40:40.068677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.068687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.069009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.069022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.069336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.069346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.069668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.069677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.069998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.070008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.070322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.070332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.070656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.070667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.071000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.071010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.071350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.071360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.071671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.071680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 
00:38:16.639 [2024-06-07 14:40:40.072000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.072010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.072333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.072342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.072663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.072674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.072976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.072986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.073271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.073280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.073395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.639 [2024-06-07 14:40:40.073404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.639 qpair failed and we were unable to recover it. 00:38:16.639 [2024-06-07 14:40:40.073711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.073721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.074073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.074084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.074408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.074419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.074721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.074731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 
00:38:16.640 [2024-06-07 14:40:40.075032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.075042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.075161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.075170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.075283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.075293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.075496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.075506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.075702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.075712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.075811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.075821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.076140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.076151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.076366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.076376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.076756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.076768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.077008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.077017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 
00:38:16.640 [2024-06-07 14:40:40.077413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.077423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.077692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.077702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.077951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.077961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.078299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.078313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.078596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.078606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.079000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.079017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.079339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.079349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.079668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.079677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.079989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.079998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.080325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.080335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 
00:38:16.640 [2024-06-07 14:40:40.080676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.080685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.080980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.080989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.081306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.081317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.081607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.081616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.081930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.081939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.082210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.082221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.082515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.082524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.082838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.082848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.083033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.083044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.083351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.083360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 
00:38:16.640 [2024-06-07 14:40:40.083680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.083690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.084021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.084031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.084332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.084341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.640 qpair failed and we were unable to recover it. 00:38:16.640 [2024-06-07 14:40:40.084643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.640 [2024-06-07 14:40:40.084653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.084866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.084876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.085177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.085186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.085597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.085608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.085988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.085998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.086313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.086323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.086653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.086662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 
00:38:16.641 [2024-06-07 14:40:40.086979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.086988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.087273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.087283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.087576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.087585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.087907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.087916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.088219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.088229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.088547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.088556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.088852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.088862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.089041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.089051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.089386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.089396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.089728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.089748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 
00:38:16.641 [2024-06-07 14:40:40.090048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.090057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.090368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.090378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.090687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.090697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.091046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.091056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.091386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.091396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.091569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.091578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.092028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.092038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.092370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.092380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.092689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.092698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 00:38:16.641 [2024-06-07 14:40:40.092989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.641 [2024-06-07 14:40:40.092998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.641 qpair failed and we were unable to recover it. 
00:38:16.641 [2024-06-07 14:40:40.093336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.093346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.093576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.093585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.093891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.093900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.094115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.094125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.095050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.095072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.095497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.095509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.095823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.095836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.096211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.096221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.096558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.096567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.096886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.096896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 
00:38:16.642 [2024-06-07 14:40:40.097206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.097217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.097507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.097517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.097833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.097842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.098062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.098071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.098399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.098409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.098689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.098698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.098887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.098899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.099224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.099234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.099549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.099559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.099830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.099839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 
00:38:16.642 [2024-06-07 14:40:40.100198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.100208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.100502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.100512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.100823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.642 [2024-06-07 14:40:40.100832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.642 qpair failed and we were unable to recover it. 00:38:16.642 [2024-06-07 14:40:40.101107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.101116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.101319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.101328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.101666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.101676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.101976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.101986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.103025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.103044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.103341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.103352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.104190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.104215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 
00:38:16.643 [2024-06-07 14:40:40.104615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.104626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.104844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.104853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.105155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.105164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.105386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.105396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.105591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.105601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.105821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.105830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.106046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.106055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.106355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.106365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.106680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.106690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.107079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.107089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 
00:38:16.643 [2024-06-07 14:40:40.107387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.107403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.107684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.107694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.107993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.108003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.108319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.108331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.108638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.108648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.108929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.108938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.109256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.109272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.109671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.109682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.109882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.109892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 00:38:16.643 [2024-06-07 14:40:40.110211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.643 [2024-06-07 14:40:40.110221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.643 qpair failed and we were unable to recover it. 
00:38:16.644 [2024-06-07 14:40:40.110556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.110565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.110846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.110855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.111035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.111045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.111423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.111433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.111684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.111694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.111926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.111935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.112225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.112234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.112478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.112487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.112780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.112791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.113145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.113154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 
00:38:16.644 [2024-06-07 14:40:40.113323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.113333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.113577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.113586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.113888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.113898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.114226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.114237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.114555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.114564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.114758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.114767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.115053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.115062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.115283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.115292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.115639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.115648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.115932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.115942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 
00:38:16.644 [2024-06-07 14:40:40.116278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.116288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.116656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.116665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.116984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.116993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.117321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.117331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.117631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.117641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.644 qpair failed and we were unable to recover it. 00:38:16.644 [2024-06-07 14:40:40.117955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.644 [2024-06-07 14:40:40.117964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.118272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.118282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.118662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.118672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.118870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.118880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.119221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.119231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 
00:38:16.645 [2024-06-07 14:40:40.119543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.119552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.119875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.119885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.120207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.120217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.120540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.120550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.120850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.120860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.121168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.121177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.121497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.121507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.121819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.121828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.122021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.122030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.122247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.122259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 
00:38:16.645 [2024-06-07 14:40:40.122576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.122586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.122904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.122913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.123231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.123241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.123454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.123463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.123783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.123792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.124129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.124137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.124345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.124355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.124582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.124592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.124893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.124903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 00:38:16.645 [2024-06-07 14:40:40.125266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.125276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.645 qpair failed and we were unable to recover it. 
00:38:16.645 [2024-06-07 14:40:40.125553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.645 [2024-06-07 14:40:40.125562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.125915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.125925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.126246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.126255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.126609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.126618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.126949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.126959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.127264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.127273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.127612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.127622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.127834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.127843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.128129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.128138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.128565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.128574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 
00:38:16.646 [2024-06-07 14:40:40.128839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.128848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.129152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.129163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.129471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.129481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.129792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.129801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.130092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.130101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.130317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.130327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.130643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.130652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.130977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.130987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.131312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.131322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.131628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.131638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 
00:38:16.646 [2024-06-07 14:40:40.131974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.131983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.132275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.132284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.132607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.132617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.132914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.132924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.646 [2024-06-07 14:40:40.133151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.646 [2024-06-07 14:40:40.133161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.646 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.133506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.133516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.133700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.133709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.133924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.133933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.134129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.134138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.134471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.134480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 
00:38:16.647 [2024-06-07 14:40:40.134662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.134671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.135030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.135040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.135373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.135382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.135701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.135710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.136010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.136019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.136258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.136267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.136605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.136614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.136950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.136960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.137307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.137319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.137581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.137590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 
00:38:16.647 [2024-06-07 14:40:40.137907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.137916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.138304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.138313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.138615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.138625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.138952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.138963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.139280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.139290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.139628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.139639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.139982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.139994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.140241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.140251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.140583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.140593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 00:38:16.647 [2024-06-07 14:40:40.140934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.647 [2024-06-07 14:40:40.140944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.647 qpair failed and we were unable to recover it. 
00:38:16.647 [2024-06-07 14:40:40.141257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.141269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.141604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.141615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.141959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.141970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.142320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.142332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.142619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.142629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.142805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.142815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.143026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.143036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.143259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.143270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.143579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.143589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.143900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.143910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 
00:38:16.648 [2024-06-07 14:40:40.144267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.144279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.144594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.144604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.145014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.145024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.145243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.145253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.145340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.145352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.145685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.145698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.146009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.146020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.146234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.146246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.146622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.146633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.146943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.146955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 
00:38:16.648 [2024-06-07 14:40:40.147276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.648 [2024-06-07 14:40:40.147287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.648 qpair failed and we were unable to recover it. 00:38:16.648 [2024-06-07 14:40:40.147673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.147683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.148011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.148022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.148296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.148307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.148498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.148508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.148837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.148848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.149082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.149092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.149423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.149434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.149754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.149765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.150166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.150177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 
00:38:16.649 [2024-06-07 14:40:40.150559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.150570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.150758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.150769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.151003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.151013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.151360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.151370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.151700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.151711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.151943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.151954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.152270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.152281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.152583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.152594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.152897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.152907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.153220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.153230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 
00:38:16.649 [2024-06-07 14:40:40.153549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.153559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.153879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.153890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.154233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.154244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.154615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.154625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.154943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.154953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.155276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.155286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.155504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.155514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.155842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.155853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.156144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.156155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.649 [2024-06-07 14:40:40.156524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.156535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 
00:38:16.649 [2024-06-07 14:40:40.156758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.649 [2024-06-07 14:40:40.156767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.649 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.156962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.156975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.157179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.157190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.157375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.157385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.157719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.157730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.157945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.157955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.158227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.158241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.158568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.158578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.158735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.158745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.159037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.159047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 
00:38:16.650 [2024-06-07 14:40:40.159292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.159302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.159607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.159618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.159842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.159852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.160161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.160172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.160381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.160393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.160721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.160732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.161116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.161127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.161460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.161471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.161669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.161681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.161988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.161998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 
00:38:16.650 [2024-06-07 14:40:40.162201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.162213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.162514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.162525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.162846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.162857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.163073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.163082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.163374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.163385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.163730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.163741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.164086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.164097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.164397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.164408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.164702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.164714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.164926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.164936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 
00:38:16.650 [2024-06-07 14:40:40.165257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.165267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.165556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.165566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.165878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.165888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.166205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.166217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.166457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.166467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.650 [2024-06-07 14:40:40.166730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.650 [2024-06-07 14:40:40.166740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.650 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.166950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.166960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.167304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.167316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.167652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.167662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.167887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.167897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 
00:38:16.651 [2024-06-07 14:40:40.168207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.168217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.168520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.168530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.168745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.168755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.169040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.169050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.169402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.169413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.169704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.169715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.170037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.170047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.170351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.170362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.170684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.170695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.171034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.171044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 
00:38:16.651 [2024-06-07 14:40:40.171433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.171443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.171653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.171662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.171986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.171996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.172332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.172342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.172551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.172562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.172773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.172782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.173086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.173097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.173423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.173433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.173771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.173781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.174097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.174108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 
00:38:16.651 [2024-06-07 14:40:40.174329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.174341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.174664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.174676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.175008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.175018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.175305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.175316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.175648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.175659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.175983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.175995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.176315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.176326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.176619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.176629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.176958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.176968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.177164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.177173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 
00:38:16.651 [2024-06-07 14:40:40.177517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.177527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.177841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.177852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.178159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.178169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.651 [2024-06-07 14:40:40.178509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.651 [2024-06-07 14:40:40.178521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.651 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.178848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.178859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.179050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.179061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.179223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.179234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.179447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.179457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.179636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.179648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.179825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.179835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 
00:38:16.652 [2024-06-07 14:40:40.180057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.180067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.180431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.180442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.180749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.180759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.181077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.181088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.181494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.181505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.181816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.181827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.182136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.182147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.182539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.182550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.182870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.182882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.183091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.183102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 
00:38:16.652 [2024-06-07 14:40:40.183455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.183467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.183790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.183800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.184113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.184123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.184453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.184463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.184807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.184818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.185121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.185131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.185353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.185363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.185682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.185693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.185990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.186001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.186277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.186287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 
00:38:16.652 [2024-06-07 14:40:40.186465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.186474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.186807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.186818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.187122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.187133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.187492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.187503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.187820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.187831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.188053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.652 [2024-06-07 14:40:40.188063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.652 qpair failed and we were unable to recover it. 00:38:16.652 [2024-06-07 14:40:40.188507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.188518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.188800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.188810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.188979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.188990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.189342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.189353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 
00:38:16.653 [2024-06-07 14:40:40.189672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.189683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.189994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.190004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.190289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.190299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.190628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.190638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.190837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.190846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.191041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.191052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.191261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.191272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.191521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.191531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.191928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.191938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.192147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.192157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 
00:38:16.653 [2024-06-07 14:40:40.192422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.192433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.192773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.192784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.193107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.193117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.193449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.193460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.193765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.193776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.193955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.193965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.194271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.194282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.194607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.194618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.194940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.194952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.195291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.195302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 
00:38:16.653 [2024-06-07 14:40:40.195623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.195633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.195956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.195967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.196338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.196348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.653 qpair failed and we were unable to recover it. 00:38:16.653 [2024-06-07 14:40:40.196555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.653 [2024-06-07 14:40:40.196566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.196851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.196861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.197182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.197201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.197493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.197505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.197816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.197827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.198038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.198048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.198352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.198363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 
00:38:16.654 [2024-06-07 14:40:40.198664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.198676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.198994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.199005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.199238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.199248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.199492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.199503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.199839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.199849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.200137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.200148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.200458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.200469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.200800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.200811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.201111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.201121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.201426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.201437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 
00:38:16.654 [2024-06-07 14:40:40.201744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.201754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.201942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.654 [2024-06-07 14:40:40.201952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.654 qpair failed and we were unable to recover it. 00:38:16.654 [2024-06-07 14:40:40.202271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.202282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.202567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.202577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.202887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.202897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.203237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.203249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.203441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.203451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.203647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.203657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.203824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.203834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.204041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.204051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 
00:38:16.655 [2024-06-07 14:40:40.204344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.204355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.204564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.204574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.204908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.204919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.205121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.205131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.205445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.205457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.205790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.205800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.206119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.206129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.206329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.206340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.206599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.206609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.206962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.206972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 
00:38:16.655 [2024-06-07 14:40:40.207200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.207210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.207482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.207492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.207660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.207669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.208022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.208033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.208365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.208375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.208701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.208711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.208933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.208943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.655 qpair failed and we were unable to recover it. 00:38:16.655 [2024-06-07 14:40:40.209230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.655 [2024-06-07 14:40:40.209242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.209529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.209539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.209837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.209847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 
00:38:16.656 [2024-06-07 14:40:40.210057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.210067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.210308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.210318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.210627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.210638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.210955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.210966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.211274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.211285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.211605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.211616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.211931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.211942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.212230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.212241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.212579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.212590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.212909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.212920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 
00:38:16.656 [2024-06-07 14:40:40.213213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.213223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.213535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.213546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.213860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.213871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.214198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.214210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.214630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.214641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.214953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.214964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.215261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.215272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.215565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.215575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.215907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.215918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.216243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.216254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 
00:38:16.656 [2024-06-07 14:40:40.216588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.216598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.656 [2024-06-07 14:40:40.216803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.656 [2024-06-07 14:40:40.216813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.656 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.217119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.217129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.217445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.217458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.217634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.217644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.217985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.217995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.218218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.218228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.218420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.218431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.218760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.218771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.219084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.219094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 
00:38:16.657 [2024-06-07 14:40:40.219402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.219414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.219722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.219732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.220048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.220058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.220372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.220383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.220614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.220624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.220940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.220950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.221329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.221340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.221652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.221662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.221888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.221899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.221987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.221997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 
00:38:16.657 [2024-06-07 14:40:40.222317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.222328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.222655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.222665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.223008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.223018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.223242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.223256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.223363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.223374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.223646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.223657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.657 qpair failed and we were unable to recover it. 00:38:16.657 [2024-06-07 14:40:40.223976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.657 [2024-06-07 14:40:40.223988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.224286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.224297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.224721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.224731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.225032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.225042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 
00:38:16.658 [2024-06-07 14:40:40.225403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.225413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.225708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.225719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.226046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.226057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.226466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.226477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.226655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.226665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.226956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.226966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.227306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.227316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.227620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.227630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.227948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.227959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.228289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.228300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 
00:38:16.658 [2024-06-07 14:40:40.228651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.228661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.228994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.229004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.229238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.229249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.229602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.229612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.229795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.229805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.230122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.230133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.230471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.230482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.230676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.230687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.231019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.231029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.658 [2024-06-07 14:40:40.231380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.231390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 
00:38:16.658 [2024-06-07 14:40:40.231721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.658 [2024-06-07 14:40:40.231733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.658 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.231950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.231960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.232290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.232301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.232493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.232504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.232824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.232834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.233156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.233166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.233470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.233481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.233826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.233837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.234148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.234159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.234461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.234473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 
00:38:16.659 [2024-06-07 14:40:40.234668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.234679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.234974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.234985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.235180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.235191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.235395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.235406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.235723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.235735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.236026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.236037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.236246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.236257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.236570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.236581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.236887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.236897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.237200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.237210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 
00:38:16.659 [2024-06-07 14:40:40.237569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.237579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.237797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.237806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.238140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.238151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.238473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.238483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.238790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.238800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.239139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.659 [2024-06-07 14:40:40.239149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.659 qpair failed and we were unable to recover it. 00:38:16.659 [2024-06-07 14:40:40.239344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.239354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.239694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.239706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.240071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.240082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.240428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.240439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 
00:38:16.660 [2024-06-07 14:40:40.240746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.240756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.240960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.240970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.241284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.241296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.241560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.241570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.241877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.241887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.242179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.242190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.242420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.242431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.242617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.242627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.242823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.242834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.243143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.243154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 
00:38:16.660 [2024-06-07 14:40:40.243472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.243482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.243687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.243697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.243956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.243967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.244082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.244093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.244304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.244316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.244625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.244636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.244977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.244988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.245339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.245350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.245655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.245666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.246001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.246011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 
00:38:16.660 [2024-06-07 14:40:40.246216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.246225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.246545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.246555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.246836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.660 [2024-06-07 14:40:40.246846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.660 qpair failed and we were unable to recover it. 00:38:16.660 [2024-06-07 14:40:40.247169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.247179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.247372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.247383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.247559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.247571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.247884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.247895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.248090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.248100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.248415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.248428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.248706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.248716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 
00:38:16.661 [2024-06-07 14:40:40.249004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.249014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.249234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.249244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.249571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.249581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.249913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.249924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.250231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.250241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.250529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.250541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.250853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.250863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.251177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.251188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.251423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.251434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.251757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.251768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 
00:38:16.661 [2024-06-07 14:40:40.252087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.252098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.252438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.661 [2024-06-07 14:40:40.252449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.661 qpair failed and we were unable to recover it. 00:38:16.661 [2024-06-07 14:40:40.252759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.252770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.253008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.253019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.253229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.253241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.253451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.253461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.253679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.253688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.254020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.254031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.254343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.254355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.254693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.254703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 
00:38:16.662 [2024-06-07 14:40:40.255040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.255050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.255372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.255382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.255731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.255742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.256071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.256082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.256324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.256335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.256603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.256613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.256809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.256820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.257137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.257148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.257346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.257357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.257655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.257665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 
00:38:16.662 [2024-06-07 14:40:40.257878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.257888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.258247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.258258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.258454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.258465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.258668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.258679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.259016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.259026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.259238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.259251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.259558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.259569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.259667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.259677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.259944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.259954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.260253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.260264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 
00:38:16.662 [2024-06-07 14:40:40.260597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.260607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.260918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.260928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.261233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.662 [2024-06-07 14:40:40.261244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.662 qpair failed and we were unable to recover it. 00:38:16.662 [2024-06-07 14:40:40.261562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.663 [2024-06-07 14:40:40.261573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.663 qpair failed and we were unable to recover it. 00:38:16.663 [2024-06-07 14:40:40.261695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.663 [2024-06-07 14:40:40.261705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.663 qpair failed and we were unable to recover it. 00:38:16.663 [2024-06-07 14:40:40.262013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.663 [2024-06-07 14:40:40.262023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.663 qpair failed and we were unable to recover it. 00:38:16.663 [2024-06-07 14:40:40.262357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.663 [2024-06-07 14:40:40.262368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.663 qpair failed and we were unable to recover it. 00:38:16.663 [2024-06-07 14:40:40.262680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.663 [2024-06-07 14:40:40.262690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.663 qpair failed and we were unable to recover it. 00:38:16.663 [2024-06-07 14:40:40.263028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.663 [2024-06-07 14:40:40.263038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.663 qpair failed and we were unable to recover it. 00:38:16.663 [2024-06-07 14:40:40.263357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.663 [2024-06-07 14:40:40.263367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.663 qpair failed and we were unable to recover it. 
00:38:16.663 [2024-06-07 14:40:40.263702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.663 [2024-06-07 14:40:40.263713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.663 qpair failed and we were unable to recover it. 00:38:16.663 [2024-06-07 14:40:40.263902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.663 [2024-06-07 14:40:40.263912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.663 qpair failed and we were unable to recover it. 00:38:16.663 [2024-06-07 14:40:40.264237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.663 [2024-06-07 14:40:40.264248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.663 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.264566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.264578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.264748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.264759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.265101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.265112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.265320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.265331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.265654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.265664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.265971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.265982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.266316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.266327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 
00:38:16.940 [2024-06-07 14:40:40.266656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.266666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.266947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.266958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.267277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.267290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.267495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.267505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.267802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.267812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.268119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.268129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.268423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.268433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.268767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.268778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.269088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.269099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.269422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.269433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 
00:38:16.940 [2024-06-07 14:40:40.269748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.269759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.270066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.270077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.270283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.270293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.270671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.270682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.270859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.270870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.271154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.271165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.271522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.271532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.271622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.271631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.271945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.271956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.272228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.272238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 
00:38:16.940 [2024-06-07 14:40:40.272549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.272560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.940 [2024-06-07 14:40:40.272771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.940 [2024-06-07 14:40:40.272782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.940 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.272978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.272988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.273167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.273178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.273497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.273508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.273816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.273828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.274049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.274060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.274375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.274386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.274677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.274688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.275017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.275028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 
00:38:16.941 [2024-06-07 14:40:40.275239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.275250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.275593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.275604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.275930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.275942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.276259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.276270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.276618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.276628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.276940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.276951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.277324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.277335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.277643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.277655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.277946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.277958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.278275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.278285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 
00:38:16.941 [2024-06-07 14:40:40.278595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.278605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.278951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.278962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.279250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.279261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.279557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.279569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.279914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.279926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.280235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.280245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.280581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.280591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.280812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.280824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.281048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.281059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.281319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.281330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 
00:38:16.941 [2024-06-07 14:40:40.281644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.281655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.281969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.281981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.282336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.282347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.282530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.282542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.282838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.282848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.283152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.283163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.283545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.283556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.941 [2024-06-07 14:40:40.283886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.941 [2024-06-07 14:40:40.283897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.941 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.284206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.284217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.284519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.284530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 
00:38:16.942 [2024-06-07 14:40:40.284922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.284932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.285217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.285227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.285551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.285561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.285870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.285881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.286160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.286170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.286442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.286452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.286728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.286739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.287078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.287089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.287275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.287288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.287604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.287615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 
00:38:16.942 [2024-06-07 14:40:40.287950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.287963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.288286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.288297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.288472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.288484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.288799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.288810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.289030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.289041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.289257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.289268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.289618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.289631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.289938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.289949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.290170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.290181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.290483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.290494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 
00:38:16.942 [2024-06-07 14:40:40.290790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.290801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.291141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.291151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.291402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.291412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.291623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.291634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.291934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.291945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.292272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.292282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.292620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.292630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.292973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.292984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.293316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.293326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.293658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.293669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 
00:38:16.942 [2024-06-07 14:40:40.294026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.294037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.294261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.294271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.294587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.294597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.942 [2024-06-07 14:40:40.294927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.942 [2024-06-07 14:40:40.294937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.942 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.295229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.295240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.295536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.295548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.295742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.295753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.296075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.296088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.296310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.296321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.296554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.296564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 
00:38:16.943 [2024-06-07 14:40:40.296888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.296899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.297188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.297205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.297504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.297515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.297837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.297847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.298136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.298145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.298325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.298336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.298702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.298712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.299043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.299054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.299356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.299366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.299692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.299702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 
00:38:16.943 [2024-06-07 14:40:40.299993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.300004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.300292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.300303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.300611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.300623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.300932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.300943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.301269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.301281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.301587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.301598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.301937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.301947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.302303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.302315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.302609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.302619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.302909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.302921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 
00:38:16.943 [2024-06-07 14:40:40.303228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.303238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.303530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.303541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.303812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.303823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.304156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.304167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.304489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.304502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.304777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.304789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.305096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.305107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.305419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.305431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.305619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.943 [2024-06-07 14:40:40.305630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.943 qpair failed and we were unable to recover it. 00:38:16.943 [2024-06-07 14:40:40.305939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.305950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 
00:38:16.944 [2024-06-07 14:40:40.306308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.306318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.306631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.306642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.306949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.306959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.307278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.307290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.307585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.307595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.307759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.307770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.308065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.308075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.308413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.308424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.308642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.308653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.308981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.308991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 
00:38:16.944 [2024-06-07 14:40:40.309188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.309200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.309427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.309438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.309773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.309783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.310004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.310014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.310358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.310369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.310689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.310700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.311020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.311031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.311360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.311372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.311671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.311682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.311893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.311903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 
00:38:16.944 [2024-06-07 14:40:40.312251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.312262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.312567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.312578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.312905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.312916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.313231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.313242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.313602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.313613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.313921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.313933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.314367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.314378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.314595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.314605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.314806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.314816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.315042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.315052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 
00:38:16.944 [2024-06-07 14:40:40.315363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.315374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.315675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.315686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.315902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.315913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.316122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.944 [2024-06-07 14:40:40.316132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.944 qpair failed and we were unable to recover it. 00:38:16.944 [2024-06-07 14:40:40.316378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.316389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.316708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.316719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.317040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.317050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.317318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.317329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.317663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.317674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.318006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.318016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 
00:38:16.945 [2024-06-07 14:40:40.318342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.318353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.318602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.318612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.318920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.318932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.319216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.319227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.319523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.319534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.319758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.319768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.319948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.319958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.320268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.320280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.320503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.320513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.320824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.320835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 
00:38:16.945 [2024-06-07 14:40:40.321057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.321066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.321377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.321388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.321733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.321744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.322135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.322146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.322462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.322474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.322787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.322797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.323097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.323109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.323400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.323411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.323744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.323754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.324087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.324098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 
00:38:16.945 [2024-06-07 14:40:40.324446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.324457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.324792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.324804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.325111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.325124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.945 [2024-06-07 14:40:40.325365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.945 [2024-06-07 14:40:40.325376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.945 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.325540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.325551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.325888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.325899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.326206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.326218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.326544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.326555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.326743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.326754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.327081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.327091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 
00:38:16.946 [2024-06-07 14:40:40.327430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.327442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.327754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.327764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.328079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.328090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.328295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.328306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.328645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.328655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.328991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.329003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.329331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.329343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.329636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.329647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.329936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.329946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.330204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.330215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 
00:38:16.946 [2024-06-07 14:40:40.330525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.330535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.330843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.330854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.331221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.331231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.331546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.331558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.331771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.331781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.332007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.332017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.332340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.332351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.332671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.332682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.332976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.332986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.333175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.333188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 
00:38:16.946 [2024-06-07 14:40:40.333533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.333544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.333758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.333769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.334099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.334110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.334417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.334429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.334636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.334647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.334973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.334984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.335385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.335396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.335721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.335732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.336046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.336056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.336384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.336396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 
00:38:16.946 [2024-06-07 14:40:40.336751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.336762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.946 [2024-06-07 14:40:40.336981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.946 [2024-06-07 14:40:40.336991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.946 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.337178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.337189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.337605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.337616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.337802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.337812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.338145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.338156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.338488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.338499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.338832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.338843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.339087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.339098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.339457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.339467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 
00:38:16.947 [2024-06-07 14:40:40.339676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.339686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.339981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.339991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.340210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.340221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.340559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.340570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.340913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.340925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.341253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.341263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.341599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.341609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.341915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.341926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.342243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.342254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.342565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.342576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 
00:38:16.947 [2024-06-07 14:40:40.342862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.342872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.343072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.343082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.343405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.343416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.343732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.343743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.344049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.344059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.344361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.344372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.344703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.344713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.345025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.345036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.345366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.345377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.345664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.345674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 
00:38:16.947 [2024-06-07 14:40:40.345898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.345908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.346248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.346259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.346561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.346572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.346705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.346717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.347255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.347345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.347775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.347810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.348040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.348067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.348453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.348465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.348765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.348776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.947 qpair failed and we were unable to recover it. 00:38:16.947 [2024-06-07 14:40:40.348963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.947 [2024-06-07 14:40:40.348975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 
00:38:16.948 [2024-06-07 14:40:40.349228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.349239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.349459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.349470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.349809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.349819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.350139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.350151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.350516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.350527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.350886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.350897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.351291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.351302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.351526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.351536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.351869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.351880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.352073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.352082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 
00:38:16.948 [2024-06-07 14:40:40.352336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.352347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.352521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.352530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.352841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.352852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.353189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.353204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.353433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.353443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.353738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.353749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.354091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.354101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.354421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.354433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.354747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.354757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.355072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.355083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 
00:38:16.948 [2024-06-07 14:40:40.355404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.355415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.355633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.355643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.355865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.355877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.356070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.356081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.356311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.356322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.356660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.356671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.356979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.356998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.357306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.357317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.357635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.357646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.357988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.357999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 
00:38:16.948 [2024-06-07 14:40:40.358207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.358217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.358503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.358513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.358709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.358719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.359036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.359046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.359373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.359385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.359699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.359710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.359890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.948 [2024-06-07 14:40:40.359900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.948 qpair failed and we were unable to recover it. 00:38:16.948 [2024-06-07 14:40:40.360215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.360226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.360611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.360621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.360830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.360839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 
00:38:16.949 [2024-06-07 14:40:40.361141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.361152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.361369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.361379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.361664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.361674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.361990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.362001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.362317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.362330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.362654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.362665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.363000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.363010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.363350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.363360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.363682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.363692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.363891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.363901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 
00:38:16.949 [2024-06-07 14:40:40.364265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.364276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.364430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.364440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.364693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.364704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.364979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.364989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.365366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.365377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.365648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.365658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.365853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.365863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.366287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.366298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.366634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.366645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.366967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.366979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 
00:38:16.949 [2024-06-07 14:40:40.367322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.367332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.367619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.367631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.368010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.368020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.368285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.368296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.368599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.368609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.368926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.368937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.369258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.369268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.369636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.949 [2024-06-07 14:40:40.369647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.949 qpair failed and we were unable to recover it. 00:38:16.949 [2024-06-07 14:40:40.369985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.369996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.370315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.370326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 
00:38:16.950 [2024-06-07 14:40:40.370645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.370657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.370971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.370983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.371318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.371329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.371645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.371656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.372002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.372012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.372346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.372359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.372702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.372713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.372925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.372935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.373252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.373263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.373633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.373644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 
00:38:16.950 [2024-06-07 14:40:40.373946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.373957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.374240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.374251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.374584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.374595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.374925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.374937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.375116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.375127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.375343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.375353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.375669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.375679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.376009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.376020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.376354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.376365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.376704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.376714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 
00:38:16.950 [2024-06-07 14:40:40.377015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.377026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.377353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.377364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.377736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.377747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.378078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.378089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.378449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.378459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.378789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.378800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.379108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.379119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.379455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.379467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.379799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.379809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.380115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.380126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 
00:38:16.950 [2024-06-07 14:40:40.380441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.380452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.380763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.380774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.381060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.381071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.381420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.381431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.381758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.381769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.382100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.382110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.950 qpair failed and we were unable to recover it. 00:38:16.950 [2024-06-07 14:40:40.382425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.950 [2024-06-07 14:40:40.382437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.382741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.382751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.383087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.383098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.383414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.383424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 
00:38:16.951 [2024-06-07 14:40:40.383757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.383768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.384132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.384143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.384423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.384434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.384601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.384612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.384919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.384929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.385249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.385260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.385641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.385652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.385930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.385940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.386265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.386276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.386600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.386611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 
00:38:16.951 [2024-06-07 14:40:40.386946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.386958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.387289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.387299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.387617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.387628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.387922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.387932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.388217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.388227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.388524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.388535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.388845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.388856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.389118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.389129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.389438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.389449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.389769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.389779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 
00:38:16.951 [2024-06-07 14:40:40.390124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.390135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.390466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.390477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.390695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.390705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.391015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.391025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.391304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.391315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.391636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.391646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.391847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.391857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.392191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.392207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.392431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.392441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.392821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.392834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 
00:38:16.951 [2024-06-07 14:40:40.393148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.393159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.393470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.393481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.393786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.393797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.951 [2024-06-07 14:40:40.394128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.951 [2024-06-07 14:40:40.394139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.951 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.394452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.394463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.394779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.394790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.395118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.395128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.395309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.395319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.395431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.395441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.395746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.395758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 
00:38:16.952 [2024-06-07 14:40:40.396100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.396111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.396439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.396450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.396654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.396664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.397026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.397037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.397373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.397384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.397566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.397576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.397950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.397961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.398148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.398157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.398342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.398352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.398573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.398583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 
00:38:16.952 [2024-06-07 14:40:40.398906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.398917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.399227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.399237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.399562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.399573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.399888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.399899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.400206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.400217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.400394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.400405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.400705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.400717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.401050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.401060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.401391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.401402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.401596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.401606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 
00:38:16.952 [2024-06-07 14:40:40.401783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.401794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.402061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.402072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.402432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.402443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.402634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.402644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.402969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.402980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.403259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.403270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.403571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.403581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.403906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.403917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.404265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.404277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.404455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.404466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 
00:38:16.952 [2024-06-07 14:40:40.404881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.404892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.405214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.952 [2024-06-07 14:40:40.405226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.952 qpair failed and we were unable to recover it. 00:38:16.952 [2024-06-07 14:40:40.405412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.405423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.405778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.405789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.405995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.406006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.406188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.406202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.406395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.406406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.406707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.406718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.406932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.406942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.407272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.407283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 
00:38:16.953 [2024-06-07 14:40:40.407613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.407625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.407793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.407803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.408186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.408204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.408504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.408518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.408858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.408869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.409226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.409237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.409586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.409597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.409910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.409920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.410255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.410266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.410609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.410620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 
00:38:16.953 [2024-06-07 14:40:40.410931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.410942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.411253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.411265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.411606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.411616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.411911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.411931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.412234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.412245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.412657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.412667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.412966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.412976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.413278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.413290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.413639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.413650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.413872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.413883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 
00:38:16.953 [2024-06-07 14:40:40.414116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.414127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.414378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.414390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.414721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.414732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.415047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.415058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.415277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.415289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.415623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.415635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.415818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.415828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.416136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.416147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.416382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.416393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 00:38:16.953 [2024-06-07 14:40:40.416758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.953 [2024-06-07 14:40:40.416768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.953 qpair failed and we were unable to recover it. 
00:38:16.953 [2024-06-07 14:40:40.417087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.417097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.417404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.417415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.417615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.417625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.417956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.417966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.418290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.418301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.418522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.418532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.418834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.418844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.419170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.419180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.419601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.419612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.419931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.419942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 
00:38:16.954 [2024-06-07 14:40:40.420265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.420276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.420602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.420613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.420991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.421001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.421331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.421342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.421681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.421692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.422032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.422042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.422374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.422385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.422725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.422736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.423033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.423045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.423324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.423335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 
00:38:16.954 [2024-06-07 14:40:40.423507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.423518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.423825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.423835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.424089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.424099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.424332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.424342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.424673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.424684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.425017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.425028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.425354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.425364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.425437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.425447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.425741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.425751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.425956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.425966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 
00:38:16.954 [2024-06-07 14:40:40.426161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.426171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.426498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.954 [2024-06-07 14:40:40.426509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.954 qpair failed and we were unable to recover it. 00:38:16.954 [2024-06-07 14:40:40.426853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.426864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.427171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.427182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.427405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.427415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.427494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.427503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.427855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.427865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.428169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.428179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.428471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.428482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.428799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.428810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 
00:38:16.955 [2024-06-07 14:40:40.429079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.429090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.429172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.429185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.429303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.429313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.429643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.429653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.429986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.429997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.430319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.430329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.430423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.430433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.430644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.430655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.430965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.430975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.431188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.431201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 
00:38:16.955 [2024-06-07 14:40:40.431503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.431514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.431796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.431806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.432113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.432123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.432500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.432511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.432852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.432862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.433070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.433081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.433420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.433432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.433632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.433642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.433958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.433969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.434270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.434281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 
00:38:16.955 [2024-06-07 14:40:40.434613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.434624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.434931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.434942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.435271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.435281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.435488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.435498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.435823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.435834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.436033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.436044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.436338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.436349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.436579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.436589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.436830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.436842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 00:38:16.955 [2024-06-07 14:40:40.437081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.437092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.955 qpair failed and we were unable to recover it. 
00:38:16.955 [2024-06-07 14:40:40.437458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.955 [2024-06-07 14:40:40.437469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.437763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.437773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.438138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.438149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.438469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.438480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.438790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.438801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.439087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.439098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.439397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.439408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.439712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.439722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.440027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.440038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.440341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.440351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 
00:38:16.956 [2024-06-07 14:40:40.440665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.440675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.440984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.440995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.441327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.441337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.441633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.441644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.441973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.441984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.442300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.442311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.442671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.442682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.443023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.443033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.443305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.443316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.443617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.443627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 
00:38:16.956 [2024-06-07 14:40:40.443960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.443970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.444283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.444295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.444701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.444712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.444949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.444961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.445292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.445302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.445687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.445698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.445990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.446002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.446213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.446224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.446432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.446443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.446756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.446767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 
00:38:16.956 [2024-06-07 14:40:40.447130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.447141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.447267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.447277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.447697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.447708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.447924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.447935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.448285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.448295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.448611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.448623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.448829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.448840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.449227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.449238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.956 [2024-06-07 14:40:40.449580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.956 [2024-06-07 14:40:40.449590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.956 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.449924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.449935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 
00:38:16.957 [2024-06-07 14:40:40.450124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.450134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.450498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.450509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.450851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.450861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.451185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.451208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.451563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.451574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.451879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.451890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.452354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.452365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.452750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.452761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.453060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.453071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.453393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.453405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 
00:38:16.957 [2024-06-07 14:40:40.453789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.453800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.454107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.454118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.454430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.454442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.454786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.454797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.455126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.455136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.455372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.455382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.455723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.455733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.456112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.456123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.456272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.456284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.456567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.456578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 
00:38:16.957 [2024-06-07 14:40:40.456886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.456898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.457201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.457213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.457566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.457576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.457795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.457805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.458137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.458148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.458306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.458317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.458699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.458713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.459029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.459040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.459237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.459248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.459608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.459619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 
00:38:16.957 [2024-06-07 14:40:40.459811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.459823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.460076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.460087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.460437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.460448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.460785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.460796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.461148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.461159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.957 [2024-06-07 14:40:40.461389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.957 [2024-06-07 14:40:40.461400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.957 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.461721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.461732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.462027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.462037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.462521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.462532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.462852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.462863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 
00:38:16.958 [2024-06-07 14:40:40.463201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.463211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.463535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.463545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.463737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.463747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.464044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.464054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.464309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.464319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.464654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.464665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.465004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.465016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.465269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.465281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.465594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.465605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.465926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.465937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 
00:38:16.958 [2024-06-07 14:40:40.466126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.466137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.466559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.466570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.466780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.466790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.466994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.467007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.467229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.467239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.467523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.467533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.467846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.467856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.468168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.468179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.468514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.468525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.468861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.468872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 
00:38:16.958 [2024-06-07 14:40:40.469242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.469254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.469645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.469656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.469959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.469970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.470269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.470280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.470624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.470635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.470822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.470832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.471165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.471176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.471510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.471520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.471727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.471737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.471916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.471927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 
00:38:16.958 [2024-06-07 14:40:40.472214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.472226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.472422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.472433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.958 qpair failed and we were unable to recover it. 00:38:16.958 [2024-06-07 14:40:40.472615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.958 [2024-06-07 14:40:40.472626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.472929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.472941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.473234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.473248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.473418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.473428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.473707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.473718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.474034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.474045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.474218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.474228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.474454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.474465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 
00:38:16.959 [2024-06-07 14:40:40.474774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.474787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.475175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.475186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.475499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.475511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.475843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.475853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.476150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.476162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.476508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.476518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.476826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.476837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.477211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.477221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.477552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.477563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.477902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.477913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 
00:38:16.959 [2024-06-07 14:40:40.478253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.478264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.478561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.478571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.478886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.478895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.479125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.479135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.479443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.479453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.479773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.479784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.480033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.480043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.480350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.480361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.480689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.480699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 00:38:16.959 [2024-06-07 14:40:40.481031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.959 [2024-06-07 14:40:40.481042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.959 qpair failed and we were unable to recover it. 
00:38:16.959 [2024-06-07 14:40:40.481376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.959 [2024-06-07 14:40:40.481387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420
00:38:16.959 qpair failed and we were unable to recover it.
[... the same three-line failure record repeats back-to-back roughly 200 more times: every connect() attempt for tqpair=0x108c730 (addr=10.0.0.2, port=4420) returns errno = 111 between 14:40:40.481 and 14:40:40.545, and each qpair fails without recovery ...]
00:38:16.965 [2024-06-07 14:40:40.545753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:16.965 [2024-06-07 14:40:40.545763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420
00:38:16.965 qpair failed and we were unable to recover it.
00:38:16.965 [2024-06-07 14:40:40.546091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.965 [2024-06-07 14:40:40.546101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.965 qpair failed and we were unable to recover it. 00:38:16.965 [2024-06-07 14:40:40.546505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.965 [2024-06-07 14:40:40.546516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.965 qpair failed and we were unable to recover it. 00:38:16.965 [2024-06-07 14:40:40.546807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.965 [2024-06-07 14:40:40.546819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.965 qpair failed and we were unable to recover it. 00:38:16.965 [2024-06-07 14:40:40.547025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.965 [2024-06-07 14:40:40.547035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.965 qpair failed and we were unable to recover it. 00:38:16.965 [2024-06-07 14:40:40.547393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.965 [2024-06-07 14:40:40.547404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.965 qpair failed and we were unable to recover it. 00:38:16.965 [2024-06-07 14:40:40.547653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.965 [2024-06-07 14:40:40.547663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.965 qpair failed and we were unable to recover it. 00:38:16.965 [2024-06-07 14:40:40.547980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.965 [2024-06-07 14:40:40.547990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.965 qpair failed and we were unable to recover it. 00:38:16.965 [2024-06-07 14:40:40.548284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.965 [2024-06-07 14:40:40.548295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.965 qpair failed and we were unable to recover it. 00:38:16.965 [2024-06-07 14:40:40.548640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.965 [2024-06-07 14:40:40.548651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.965 qpair failed and we were unable to recover it. 00:38:16.965 [2024-06-07 14:40:40.548964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.965 [2024-06-07 14:40:40.548975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 
00:38:16.966 [2024-06-07 14:40:40.549227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.549237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.549563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.549574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.549888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.549899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.550222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.550233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.550469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.550479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.550665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.550676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.551018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.551029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.551395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.551406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.551729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.551740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.551931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.551941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 
00:38:16.966 [2024-06-07 14:40:40.552274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.552285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.552609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.552620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.552955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.552965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.553232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.553242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.553444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.553454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.553777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.553788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.554095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.554106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.554349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.554361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.554691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.554703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.555034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.555043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 
00:38:16.966 [2024-06-07 14:40:40.555247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.555258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.555575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.555585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.555922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.555933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.556296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.556307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.556623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.556634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.556970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.556981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.557166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.557177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.557510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.557522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.557834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.557845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.558251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.558263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 
00:38:16.966 [2024-06-07 14:40:40.558580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.558591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.558923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.558934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.559251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.559262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.559598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.559609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.559921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.559932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.560248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.560260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.560450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.966 [2024-06-07 14:40:40.560463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.966 qpair failed and we were unable to recover it. 00:38:16.966 [2024-06-07 14:40:40.560830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.560842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.561196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.561208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.561524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.561535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 
00:38:16.967 [2024-06-07 14:40:40.561742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.561753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.561976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.561988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.562333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.562344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.562660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.562671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.562995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.563006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.563350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.563364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.563684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.563695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.564012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.564023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.564351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.564362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.564685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.564697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 
00:38:16.967 [2024-06-07 14:40:40.565033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.565045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.565368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.565379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.565604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.565615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.565907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.565918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.566248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.566260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.566451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.566464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.566741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.566753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.567063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.567075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.567421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.567433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.567784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.567795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 
00:38:16.967 [2024-06-07 14:40:40.568112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.568123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.568273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.568284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.568563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.568574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.568885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.568896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.569207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.569219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.569521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.569533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.569829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.569840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.570173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.570185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.570393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.570405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.570589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.570600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 
00:38:16.967 [2024-06-07 14:40:40.570919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.570931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.571151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.571162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:16.967 [2024-06-07 14:40:40.571255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:16.967 [2024-06-07 14:40:40.571267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:16.967 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.571529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.571542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.571870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.571882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.572104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.572116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.572353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.572365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.572668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.572679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.573065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.573076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.573393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.573405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 
00:38:17.245 [2024-06-07 14:40:40.573685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.573696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.574029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.574041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.574259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.574270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.574537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.574547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.574883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.574893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.575231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.575242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.575582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.575592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.575905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.575915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.576210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.576222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.576440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.576450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 
00:38:17.245 [2024-06-07 14:40:40.576780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.576790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.577117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.577129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.577386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.577396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.577701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.577712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.578047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.578057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.578371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.578383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.578681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.578692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.578997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.579009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.579304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.579315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.579532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.579542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 
00:38:17.245 [2024-06-07 14:40:40.579748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.579760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.580074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.580084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.580386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.580397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.580703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.580713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.581009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.581019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.581301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.581311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.581597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.581607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.581937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.581947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.245 [2024-06-07 14:40:40.582274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.245 [2024-06-07 14:40:40.582285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.245 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.582617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.582628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 
00:38:17.246 [2024-06-07 14:40:40.582941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.582952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.583134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.583144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.583484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.583496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.583700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.583711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.583911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.583921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.584222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.584233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.584663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.584674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.584884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.584894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.585201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.585212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.585509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.585521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 
00:38:17.246 [2024-06-07 14:40:40.585871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.585881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.586246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.586257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.586457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.586468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.586747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.586757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.586968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.586978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.587227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.587238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.587587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.587598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.587777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.587789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.588125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.588136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.588442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.588453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 
00:38:17.246 [2024-06-07 14:40:40.588783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.588793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.589099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.589110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.589424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.589435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.589734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.589745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.590068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.590078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.590493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.590504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.590876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.590886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.591249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.591260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.591485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.591495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.591712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.591722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 
00:38:17.246 [2024-06-07 14:40:40.591902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.591914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.592142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.592153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.592361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.592372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.592658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.592669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.592978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.592988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.246 qpair failed and we were unable to recover it. 00:38:17.246 [2024-06-07 14:40:40.593347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.246 [2024-06-07 14:40:40.593360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.593745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.593757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.594073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.594085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.594384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.594395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.594762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.594772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 
00:38:17.247 [2024-06-07 14:40:40.595145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.595156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.595471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.595483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.595685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.595695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.596006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.596026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.596250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.596261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.596504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.596514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.596898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.596908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.597218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.597229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.597586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.597597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.597923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.597934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 
00:38:17.247 [2024-06-07 14:40:40.598266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.598276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.598598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.598608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.598852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.598862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.599158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.599168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.599385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.599395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.599723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.599734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.600108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.600118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.600432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.600446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.600791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.600801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.601115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.601126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 
00:38:17.247 [2024-06-07 14:40:40.601463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.601475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.601799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.601810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.602151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.602162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.602481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.602493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.602806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.602816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.603105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.603117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.603438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.603449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.603758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.603769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.604084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.604094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.604501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.604512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 
00:38:17.247 [2024-06-07 14:40:40.604817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.604829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.605057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.605068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.247 [2024-06-07 14:40:40.605282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.247 [2024-06-07 14:40:40.605292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.247 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.605649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.605659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.606005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.606016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.606208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.606219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.606598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.606609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.606897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.606907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.607250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.607261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.607555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.607565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 
00:38:17.248 [2024-06-07 14:40:40.607867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.607877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.608190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.608206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.608524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.608534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.608840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.608852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.609070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.609082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.609373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.609384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.609697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.609707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.610041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.610051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.610434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.610445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.610757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.610768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 
00:38:17.248 [2024-06-07 14:40:40.611076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.611086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.611312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.611322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.611626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.611637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.611955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.611965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.612324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.612335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.612668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.612679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.612989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.612999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.613316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.613327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.613544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.613554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.613889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.613899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 
00:38:17.248 [2024-06-07 14:40:40.614232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.614243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.614562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.614573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.614801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.614811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.615146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.615156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.615410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.615420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.615735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.615745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.616088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.616098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.616293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.616303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.616655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.616666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 00:38:17.248 [2024-06-07 14:40:40.617031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.248 [2024-06-07 14:40:40.617042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.248 qpair failed and we were unable to recover it. 
00:38:17.248 [2024-06-07 14:40:40.617358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.617369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.617690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.617701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.617884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.617895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.618235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.618246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.618485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.618496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.618818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.618829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.619205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.619216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.619530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.619541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.619827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.619838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.620152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.620162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 
00:38:17.249 [2024-06-07 14:40:40.620494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.620505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.620671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.620680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.620885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.620895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.621239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.621249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.621506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.621516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.621831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.621844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.622158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.622169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.622494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.622505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.622829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.622840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.623152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.623162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 
00:38:17.249 [2024-06-07 14:40:40.623239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.623251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.623642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.623653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.624027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.624038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.624421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.624431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.624750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.624761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.625094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.625106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.625329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.625339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.625626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.625636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.625952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.625962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.626296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.626306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 
00:38:17.249 [2024-06-07 14:40:40.626665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.626676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.249 [2024-06-07 14:40:40.626843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.249 [2024-06-07 14:40:40.626855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.249 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.627173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.627184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.627505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.627516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.627851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.627861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.628217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.628229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.628653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.628663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.628965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.628976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.629320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.629331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.629650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.629662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 
00:38:17.250 [2024-06-07 14:40:40.629976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.629986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.630186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.630199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.630370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.630382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.630696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.630707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.631018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.631029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.631353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.631364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.631667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.631678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.631983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.631993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.632205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.632215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.632579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.632590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 
00:38:17.250 [2024-06-07 14:40:40.632930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.632940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.633244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.633255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.633573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.633583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.633912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.633923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.634258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.634269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.634584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.634595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.634938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.634948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.635281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.635291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.635585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.635595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.635795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.635806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 
00:38:17.250 [2024-06-07 14:40:40.636049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.636059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.636298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.636308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.636638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.636648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.636871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.636882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.637253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.637264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.637585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.637596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.637835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.637845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.638037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.638047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.638342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.638354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 00:38:17.250 [2024-06-07 14:40:40.638568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.250 [2024-06-07 14:40:40.638581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.250 qpair failed and we were unable to recover it. 
00:38:17.250 [2024-06-07 14:40:40.638866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.251 [2024-06-07 14:40:40.638877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420
00:38:17.251 qpair failed and we were unable to recover it.
00:38:17.251 (the same three-message sequence -- connect() failed with errno = 111, sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt logged between 14:40:40.638866 and 14:40:40.702254)
00:38:17.256 [2024-06-07 14:40:40.702244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.256 [2024-06-07 14:40:40.702254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420
00:38:17.256 qpair failed and we were unable to recover it.
00:38:17.256 [2024-06-07 14:40:40.702552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.702563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.256 [2024-06-07 14:40:40.702840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.702849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.256 [2024-06-07 14:40:40.703179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.703189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.256 [2024-06-07 14:40:40.703516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.703527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.256 [2024-06-07 14:40:40.703856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.703867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.256 [2024-06-07 14:40:40.704131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.704142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.256 [2024-06-07 14:40:40.704327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.704340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.256 [2024-06-07 14:40:40.704666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.704677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.256 [2024-06-07 14:40:40.705016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.705028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.256 [2024-06-07 14:40:40.705375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.705386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 
00:38:17.256 [2024-06-07 14:40:40.705704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.705719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.256 [2024-06-07 14:40:40.706038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.706050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.256 [2024-06-07 14:40:40.706328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.706338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.256 [2024-06-07 14:40:40.706674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.706685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.256 [2024-06-07 14:40:40.707016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.707026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.256 [2024-06-07 14:40:40.707341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.707352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.256 [2024-06-07 14:40:40.707666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.256 [2024-06-07 14:40:40.707676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.256 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.708006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.708020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.708240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.708260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.708578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.708599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 
00:38:17.257 [2024-06-07 14:40:40.708937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.708950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.709269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.709281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.709593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.709604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.709909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.709919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.710200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.710212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.710388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.710399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.710618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.710630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.710972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.710984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.711308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.711319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.711650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.711661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 
00:38:17.257 [2024-06-07 14:40:40.711926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.711936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.712258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.712269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.712597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.712607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.712802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.712812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.713011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.713022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.713376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.713387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.713698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.713708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.714050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.714060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.714263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.714273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.714581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.714591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 
00:38:17.257 [2024-06-07 14:40:40.714925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.714936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.715248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.715258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.715584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.715595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.715885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.715896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.716232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.716242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.716576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.716587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.716892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.716902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.717182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.717192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.717519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.717529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.717872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.717883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 
00:38:17.257 [2024-06-07 14:40:40.718193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.718210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.718509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.718519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.718835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.718846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.719184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.719198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.719498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.719509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.257 qpair failed and we were unable to recover it. 00:38:17.257 [2024-06-07 14:40:40.719782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.257 [2024-06-07 14:40:40.719792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.720052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.720062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.720276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.720288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.720554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.720564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.720893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.720903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 
00:38:17.258 [2024-06-07 14:40:40.721206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.721218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.721409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.721419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.721751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.721762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.722101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.722113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.722416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.722426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.722742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.722752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.723057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.723068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.723393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.723404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.723716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.723726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.724022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.724032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 
00:38:17.258 [2024-06-07 14:40:40.724345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.724357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.724643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.724653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.724976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.724987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.725318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.725328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.725649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.725662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.725994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.726005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.726306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.726318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.726645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.726656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.726858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.726867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.727209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.727219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 
00:38:17.258 [2024-06-07 14:40:40.727552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.727562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.727870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.727880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.728213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.728223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.728535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.728546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.728872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.728882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.729230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.729241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.729556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.729567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.729897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.258 [2024-06-07 14:40:40.729907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.258 qpair failed and we were unable to recover it. 00:38:17.258 [2024-06-07 14:40:40.730190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.730204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.730521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.730531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 
00:38:17.259 [2024-06-07 14:40:40.730859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.730869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.731183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.731198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.731517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.731527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.731852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.731863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.732071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.732082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.732385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.732396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.732721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.732732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.733061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.733072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.733389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.733400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.733761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.733771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 
00:38:17.259 [2024-06-07 14:40:40.733996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.734006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.734304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.734316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.734610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.734622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.734959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.734969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.735301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.735311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.735624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.735633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.735933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.735943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.736301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.736312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.736615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.736625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.736954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.736964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 
00:38:17.259 [2024-06-07 14:40:40.737273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.737284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.737588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.737598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.737902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.737913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.738225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.738236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.738566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.738576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.738924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.738935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.739235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.739246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.739557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.739567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.739943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.739953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.740292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.740302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 
00:38:17.259 [2024-06-07 14:40:40.740619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.740631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.740961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.740971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.741268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.741279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.741577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.741587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.741898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.741909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.742226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.742237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.259 [2024-06-07 14:40:40.742572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.259 [2024-06-07 14:40:40.742582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.259 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.742917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.742929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.743255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.743266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.743583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.743594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 
00:38:17.260 [2024-06-07 14:40:40.743920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.743930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.744271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.744282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.744605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.744615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.744983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.744994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.745324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.745335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.745685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.745696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.746007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.746018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.746329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.746341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.746664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.746675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.747010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.747021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 
00:38:17.260 [2024-06-07 14:40:40.747348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.747359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.747652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.747664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.747974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.747984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.748279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.748291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.748651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.748661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.748950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.748962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.749312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.749323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.749643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.749653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.749980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.749990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.750321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.750331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 
00:38:17.260 [2024-06-07 14:40:40.750640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.750651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.750942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.750952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.751233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.751244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.751558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.751569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.751878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.751889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.752097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.752107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.752413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.752423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.752729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.752739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.753052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.753064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.753364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.753374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 
00:38:17.260 [2024-06-07 14:40:40.753641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.753652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.753969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.753980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.754264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.754275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.754587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.754597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.754907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.754918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.260 qpair failed and we were unable to recover it. 00:38:17.260 [2024-06-07 14:40:40.755255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.260 [2024-06-07 14:40:40.755266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.755566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.755577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.755912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.755923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.756234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.756245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.756567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.756579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 
00:38:17.261 [2024-06-07 14:40:40.756904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.756914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.757263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.757275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.757607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.757618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.757972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.757983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.758306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.758318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.758617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.758628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.758958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.758969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.759332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.759342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.759674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.759684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.760022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.760033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 
00:38:17.261 [2024-06-07 14:40:40.760364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.760374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.760705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.760716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.761023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.761033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.761367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.761377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.761681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.761692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.762023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.762033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.762342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.762355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.762699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.762709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.762862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.762873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.763204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.763214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 
00:38:17.261 [2024-06-07 14:40:40.763525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.763537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.763726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.763737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.764035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.764054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.764415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.764425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.764729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.764740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.765074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.765085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.765421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.765434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.765802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.765813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.766119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.766130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.766474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.766485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 
00:38:17.261 [2024-06-07 14:40:40.766796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.766808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.767120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.767131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.767356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.767368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.767667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.261 [2024-06-07 14:40:40.767678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.261 qpair failed and we were unable to recover it. 00:38:17.261 [2024-06-07 14:40:40.768006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.768017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.768345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.768356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.768680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.768691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.768989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.768999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.769308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.769319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.769630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.769641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 
00:38:17.262 [2024-06-07 14:40:40.769956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.769967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.770289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.770300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.770610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.770620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.770947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.770957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.771258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.771268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.771572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.771584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.771913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.771923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.772237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.772249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.772541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.772551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.772897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.772908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 
00:38:17.262 [2024-06-07 14:40:40.773235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.773246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.773579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.773589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.773916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.773927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.774266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.774279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.774581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.774592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.774777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.774789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.775086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.775098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.775438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.775449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.775747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.775758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.776088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.776097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 
00:38:17.262 [2024-06-07 14:40:40.776415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.776427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.776737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.776748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.777100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.777110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.777445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.777456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.777789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.777799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.778132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.778142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.778441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.778451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.778766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.778777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.779155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.779165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.779497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.779509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 
00:38:17.262 [2024-06-07 14:40:40.779831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.779842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.780153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.780164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.262 [2024-06-07 14:40:40.780491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.262 [2024-06-07 14:40:40.780502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.262 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.780796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.780807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.781122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.781133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.781448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.781459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.781784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.781795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.782124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.782134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.782445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.782457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.782834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.782845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 
00:38:17.263 [2024-06-07 14:40:40.783174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.783186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.783486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.783498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.783824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.783835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.784162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.784173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.784502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.784513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.784798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.784809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.785138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.785150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.785459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.785470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.785779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.785790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.786122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.786134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 
00:38:17.263 [2024-06-07 14:40:40.786435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.786447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.786757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.786768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.787079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.787090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.787416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.787427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.787790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.787801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.788114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.788126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.788453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.788465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.788804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.788816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.789122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.789133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.789441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.789452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 
00:38:17.263 [2024-06-07 14:40:40.789758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.789770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.790115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.790126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.790455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.790466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.790758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.790770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.791079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.263 [2024-06-07 14:40:40.791090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.263 qpair failed and we were unable to recover it. 00:38:17.263 [2024-06-07 14:40:40.791414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.791426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.791759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.791770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.792085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.792095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.792450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.792462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.792793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.792805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 
00:38:17.264 [2024-06-07 14:40:40.793128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.793139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.793442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.793454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.793763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.793774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.794139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.794151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.794330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.794343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.794640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.794651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.794978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.794989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.795301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.795312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.795639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.795650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.796018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.796028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 
00:38:17.264 [2024-06-07 14:40:40.796378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.796389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.796726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.796739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.797068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.797079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.797374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.797385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.797720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.797730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.798064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.798075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.798378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.798388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.798697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.798707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.799033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.799043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 00:38:17.264 [2024-06-07 14:40:40.799330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.264 [2024-06-07 14:40:40.799340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.264 qpair failed and we were unable to recover it. 
00:38:17.264 [2024-06-07 14:40:40.799664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:38:17.264 [2024-06-07 14:40:40.799675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 
00:38:17.264 qpair failed and we were unable to recover it. 
00:38:17.264 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-06-07 14:40:40.799997 through 14:40:40.866997, elapsed 00:38:17.264 to 00:38:17.270 ...]
00:38:17.270 [2024-06-07 14:40:40.867328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.867338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.867621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.867631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.867943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.867953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.868263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.868275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.868608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.868618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.868963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.868974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.869285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.869296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.869606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.869617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.869926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.869936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.870271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.870282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 
00:38:17.270 [2024-06-07 14:40:40.870682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.870692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.871013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.871023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.871336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.871346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.871681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.871691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.872013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.872023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.872326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.872336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.872664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.872674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.873010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.873021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.873340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.873351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.873684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.873694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 
00:38:17.270 [2024-06-07 14:40:40.874024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.874034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.874358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.874368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.874687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.874697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.875027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.875038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.875351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.875363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.270 [2024-06-07 14:40:40.875708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.270 [2024-06-07 14:40:40.875720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.270 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.876048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.876060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.876264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.876275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.876583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.876594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.876892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.876903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 
00:38:17.550 [2024-06-07 14:40:40.877242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.877253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.877571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.877581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.877892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.877902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.878241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.878252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.878598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.878609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.878948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.878959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.879263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.879273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.879570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.879581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.879875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.879887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.880076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.880086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 
00:38:17.550 [2024-06-07 14:40:40.880412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.880424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.880758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.880768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.881097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.881107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.881440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.881452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.881784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.881795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.882111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.882122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.882433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.882444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.882777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.882788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.883102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.883114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.883389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.883400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 
00:38:17.550 [2024-06-07 14:40:40.883716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.550 [2024-06-07 14:40:40.883728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.550 qpair failed and we were unable to recover it. 00:38:17.550 [2024-06-07 14:40:40.883911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.883923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.884257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.884268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.884658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.884669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.884884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.884893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.885224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.885235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.885538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.885549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.885869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.885880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.886128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.886140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.886434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.886445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 
00:38:17.551 [2024-06-07 14:40:40.886765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.886776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.887015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.887027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.887348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.887359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.887616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.887628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.887933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.887945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.888246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.888258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.888479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.888491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.888671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.888682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.889011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.889023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.889334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.889346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 
00:38:17.551 [2024-06-07 14:40:40.889665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.889676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.890005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.890017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.890322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.890334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.890644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.890656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.890857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.890868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.891191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.891213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.891578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.891589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.891897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.891909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.892210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.892222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.892510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.892520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 
00:38:17.551 [2024-06-07 14:40:40.892856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.892868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.893210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.893221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.893544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.893554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.893890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.893901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.894122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.894132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.894418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.894428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.894712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.894723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.895042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.895053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.895374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.551 [2024-06-07 14:40:40.895384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.551 qpair failed and we were unable to recover it. 00:38:17.551 [2024-06-07 14:40:40.895668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.895678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 
00:38:17.552 [2024-06-07 14:40:40.895856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.895866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.896207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.896218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.896565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.896575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.896793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.896806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.897130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.897141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.897365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.897375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.897615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.897625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.897961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.897971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.898282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.898292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.898578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.898588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 
00:38:17.552 [2024-06-07 14:40:40.898778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.898788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.899102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.899111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.899280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.899291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.899498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.899508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.899837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.899847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.900191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.900206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.900517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.900527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.900815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.900826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.901142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.901153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.901466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.901477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 
00:38:17.552 [2024-06-07 14:40:40.901800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.901810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.902115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.902125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.902451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.902461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.902661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.902671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.902980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.902991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.903321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.903331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.903699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.903710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.904064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.904075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.904287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.904298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.904464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.904475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 
00:38:17.552 [2024-06-07 14:40:40.904742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.904755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.905047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.905059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.905369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.905380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.905583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.905593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.905795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.905806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.906092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.906103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.906402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.906422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.552 qpair failed and we were unable to recover it. 00:38:17.552 [2024-06-07 14:40:40.906758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.552 [2024-06-07 14:40:40.906768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.553 qpair failed and we were unable to recover it. 00:38:17.553 [2024-06-07 14:40:40.906978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.553 [2024-06-07 14:40:40.906988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.553 qpair failed and we were unable to recover it. 00:38:17.553 [2024-06-07 14:40:40.907204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.553 [2024-06-07 14:40:40.907214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.553 qpair failed and we were unable to recover it. 
00:38:17.553 [2024-06-07 14:40:40.907578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.553 [2024-06-07 14:40:40.907588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420
00:38:17.553 qpair failed and we were unable to recover it.
00:38:17.558 [identical entries repeat continuously from 14:40:40.907 through 14:40:40.971: posix_sock_create reports connect() failed, errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x108c730 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it."]
00:38:17.558 [2024-06-07 14:40:40.971830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.558 [2024-06-07 14:40:40.971841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.558 qpair failed and we were unable to recover it. 00:38:17.558 [2024-06-07 14:40:40.972178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.558 [2024-06-07 14:40:40.972189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.558 qpair failed and we were unable to recover it. 00:38:17.558 [2024-06-07 14:40:40.972517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.558 [2024-06-07 14:40:40.972528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.558 qpair failed and we were unable to recover it. 00:38:17.558 [2024-06-07 14:40:40.972837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.558 [2024-06-07 14:40:40.972848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.558 qpair failed and we were unable to recover it. 00:38:17.558 [2024-06-07 14:40:40.973161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.558 [2024-06-07 14:40:40.973172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.558 qpair failed and we were unable to recover it. 00:38:17.558 [2024-06-07 14:40:40.973504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.558 [2024-06-07 14:40:40.973515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.558 qpair failed and we were unable to recover it. 00:38:17.558 [2024-06-07 14:40:40.973882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.558 [2024-06-07 14:40:40.973893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.558 qpair failed and we were unable to recover it. 00:38:17.558 [2024-06-07 14:40:40.974227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.558 [2024-06-07 14:40:40.974238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.558 qpair failed and we were unable to recover it. 00:38:17.558 [2024-06-07 14:40:40.974563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.558 [2024-06-07 14:40:40.974574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.558 qpair failed and we were unable to recover it. 00:38:17.558 [2024-06-07 14:40:40.974888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.558 [2024-06-07 14:40:40.974899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.558 qpair failed and we were unable to recover it. 
00:38:17.558 [2024-06-07 14:40:40.975224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.558 [2024-06-07 14:40:40.975234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.558 qpair failed and we were unable to recover it. 00:38:17.558 [2024-06-07 14:40:40.975377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.975387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.975591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.975602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.975948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.975958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.976289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.976301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.976637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.976648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.976980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.976990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.977331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.977342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.977649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.977659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.977875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.977885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 
00:38:17.559 [2024-06-07 14:40:40.978154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.978164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.978482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.978492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.978819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.978829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.979132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.979142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.979472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.979487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.979690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.979701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.980014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.980025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.980340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.980351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.980658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.980669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.981008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.981018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 
00:38:17.559 [2024-06-07 14:40:40.981334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.981346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.981690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.981700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.982008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.982019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.982351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.982362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.982686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.982696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.983025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.983035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.983295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.983305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.983613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.983624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.983960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.983970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.984805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.984830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 
00:38:17.559 [2024-06-07 14:40:40.985151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.985163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.985469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.985480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.985669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.559 [2024-06-07 14:40:40.985681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.559 qpair failed and we were unable to recover it. 00:38:17.559 [2024-06-07 14:40:40.985947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.985958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.986289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.986300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.986602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.986613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.986923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.986934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.987143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.987152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.987474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.987485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.987862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.987873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 
00:38:17.560 [2024-06-07 14:40:40.988163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.988175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.988367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.988379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.988593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.988604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.988949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.988959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.989187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.989207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.989491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.989502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.989855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.989865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.990172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.990184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.990510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.990521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.990865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.990875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 
00:38:17.560 [2024-06-07 14:40:40.991185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.991208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.991514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.991525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.991851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.991862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.992164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.992174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.992512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.992523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.992853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.992864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.993171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.993181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.993427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.993438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.993638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.993648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.993943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.993952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 
00:38:17.560 [2024-06-07 14:40:40.994281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.994292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.994591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.994600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.994945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.994955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.995290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.995301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.995631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.995642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.995972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.995983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.996311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.996322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.996737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.996748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.997052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.997063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.997400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.997411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 
00:38:17.560 [2024-06-07 14:40:40.997757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.997767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.998100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.560 [2024-06-07 14:40:40.998110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.560 qpair failed and we were unable to recover it. 00:38:17.560 [2024-06-07 14:40:40.998439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:40.998450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:40.998756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:40.998766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:40.999108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:40.999118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:40.999498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:40.999509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:40.999825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:40.999836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.000169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.000179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.000496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.000506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.000846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.000857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 
00:38:17.561 [2024-06-07 14:40:41.001200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.001211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.001503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.001513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.001831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.001844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.002151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.002162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.002489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.002500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.002905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.002916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.003220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.003231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.003449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.003460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.003808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.003818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.004129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.004141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 
00:38:17.561 [2024-06-07 14:40:41.004471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.004482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.004828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.004838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.005145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.005165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.005530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.005540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.005846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.005857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.006192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.006206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.006391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.006401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.006726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.006736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.007095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.007105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.007302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.007313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 
00:38:17.561 [2024-06-07 14:40:41.007503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.007513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.007787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.007797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.008098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.008110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.008315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.008325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.008639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.008649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.008961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.008972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.009313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.009323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.009619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.009630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.009957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.009967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.561 [2024-06-07 14:40:41.010303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.010316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 
00:38:17.561 [2024-06-07 14:40:41.010642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.561 [2024-06-07 14:40:41.010654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.561 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.010930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.010940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.011269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.011280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.011586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.011596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.011964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.011975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.012310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.012320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.012516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.012527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.012769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.012780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.013101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.013111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.013297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.013308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 
00:38:17.562 [2024-06-07 14:40:41.013640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.013651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.013981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.013991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.014321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.014331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.014621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.014632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.015002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.015013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.015331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.015342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.015544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.015555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.015879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.015889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.016200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.016210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.016538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.016548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 
00:38:17.562 [2024-06-07 14:40:41.016849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.016860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.017046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.017058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.017365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.017376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.017692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.017703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.018033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.018044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.018432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.018442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.018746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.018757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.019085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.019095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.019381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.019391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.019718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.019728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 
00:38:17.562 [2024-06-07 14:40:41.020096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.020106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.020286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.020295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.020462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.020473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.020760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.020770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.021115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.021125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.021321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.021331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.021655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.021666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.022002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.022013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.022326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.022337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.562 qpair failed and we were unable to recover it. 00:38:17.562 [2024-06-07 14:40:41.022500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.562 [2024-06-07 14:40:41.022510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 
00:38:17.563 [2024-06-07 14:40:41.022801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.022811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.023143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.023154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.023340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.023351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.023631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.023641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.023951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.023961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.024144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.024155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.024465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.024475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.024803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.024814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.025123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.025135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.025341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.025353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 
00:38:17.563 [2024-06-07 14:40:41.025687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.025698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.025998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.026009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.026338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.026349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.026577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.026587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.026860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.026871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.027183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.027193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.027503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.027513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.027821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.027832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.028141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.028151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.028500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.028511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 
00:38:17.563 [2024-06-07 14:40:41.028842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.028852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.028901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.028910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.029206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.029216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.029544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.029555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.029742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.029751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.030066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.030076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.030408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.030418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.030748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.030761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.031089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.031100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.031325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.031335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 
00:38:17.563 [2024-06-07 14:40:41.031679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.031689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.031865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.031875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.032187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.032202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.032573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.032584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.032930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.032940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.033252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.033264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.033576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.033586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.563 qpair failed and we were unable to recover it. 00:38:17.563 [2024-06-07 14:40:41.033915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.563 [2024-06-07 14:40:41.033925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.034249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.034260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.034429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.034439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 
00:38:17.564 [2024-06-07 14:40:41.034767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.034778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.034965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.034976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.035299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.035310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.035635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.035646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.035951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.035962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.036145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.036156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.036467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.036477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.036788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.036798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.037109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.037120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.037424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.037435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 
00:38:17.564 [2024-06-07 14:40:41.037768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.037779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.038115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.038126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.038448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.038458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.038771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.038782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.038970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.038983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.039313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.039324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.039508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.039518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.039805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.039815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.040145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.040155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.040470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.040480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 
00:38:17.564 [2024-06-07 14:40:41.040792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.040802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.041121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.041132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.041446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.041457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.041788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.041799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.042163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.042174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.042466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.042478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.042792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.042803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.043133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.043144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.043475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.564 [2024-06-07 14:40:41.043485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.564 qpair failed and we were unable to recover it. 00:38:17.564 [2024-06-07 14:40:41.043788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.043798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 
00:38:17.565 [2024-06-07 14:40:41.044131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.044142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.044455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.044465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.044782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.044792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.045122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.045132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.045454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.045465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.045832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.045842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.046172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.046183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.046496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.046507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.046870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.046881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.047185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.047203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 
00:38:17.565 [2024-06-07 14:40:41.047524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.047535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.047863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.047875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.048211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.048229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.048426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.048436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.048712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.048723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.049055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.049065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.049404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.049414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.049735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.049747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.050076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.050085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.050412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.050423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 
00:38:17.565 [2024-06-07 14:40:41.050759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.050769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.050975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.050984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.051260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.051271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.051559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.051570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.051858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.051868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.052229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.052240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.052588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.052598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.052907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.052918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.053251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.053262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.053574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.053585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 
00:38:17.565 [2024-06-07 14:40:41.053899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.053910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.054265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.054276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.054610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.054620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.054858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.054868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.055169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.055180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.055495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.055506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.055841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.565 [2024-06-07 14:40:41.055851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.565 qpair failed and we were unable to recover it. 00:38:17.565 [2024-06-07 14:40:41.056177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.056188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.056512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.056522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.056749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.056758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 
00:38:17.566 [2024-06-07 14:40:41.057019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.057030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.057371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.057382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.057692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.057702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.058032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.058043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.058234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.058245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.058453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.058464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.058774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.058784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.059094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.059105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.059382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.059392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.059702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.059713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 
00:38:17.566 [2024-06-07 14:40:41.060039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.060049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.060362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.060374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.060667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.060678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.061006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.061017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.061328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.061339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.061649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.061661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.062030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.062040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.062365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.062376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.062707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.062718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.063078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.063088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 
00:38:17.566 [2024-06-07 14:40:41.063292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.063302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.063633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.063644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.063958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.063968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.064144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.064154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.064460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.064470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.064802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.064813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.065132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.065143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.065475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.065486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.065804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.065815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.066032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.066042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 
00:38:17.566 [2024-06-07 14:40:41.066274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.066284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.066589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.066599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.066935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.066945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.067303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.067313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.067595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.067605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.566 [2024-06-07 14:40:41.067934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.566 [2024-06-07 14:40:41.067945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.566 qpair failed and we were unable to recover it. 00:38:17.567 [2024-06-07 14:40:41.068291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.567 [2024-06-07 14:40:41.068302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.567 qpair failed and we were unable to recover it. 00:38:17.567 [2024-06-07 14:40:41.068622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.567 [2024-06-07 14:40:41.068632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.567 qpair failed and we were unable to recover it. 00:38:17.567 [2024-06-07 14:40:41.069005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.567 [2024-06-07 14:40:41.069015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.567 qpair failed and we were unable to recover it. 00:38:17.567 [2024-06-07 14:40:41.069274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.567 [2024-06-07 14:40:41.069286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.567 qpair failed and we were unable to recover it. 
00:38:17.572 [2024-06-07 14:40:41.132739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.132750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.133085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.133096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.133265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.133278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.133592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.133604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.133917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.133928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.134233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.134244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.134580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.134590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.134768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.134779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.135135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.135145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.135472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.135483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 
00:38:17.572 [2024-06-07 14:40:41.135812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.135823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.136134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.136144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.136436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.136447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.136821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.136831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.137161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.137172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.137499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.137510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.137838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.137850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.138181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.138192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.138531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.572 [2024-06-07 14:40:41.138541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.572 qpair failed and we were unable to recover it. 00:38:17.572 [2024-06-07 14:40:41.138735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.138744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 
00:38:17.573 [2024-06-07 14:40:41.139076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.139086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.139375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.139385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.139595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.139605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.139917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.139927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.140202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.140213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.140513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.140523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.140878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.140887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.141205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.141215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.141566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.141576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.141961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.141972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 
00:38:17.573 [2024-06-07 14:40:41.142308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.142319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.142643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.142653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.142863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.142873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.143208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.143219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.143576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.143587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.143922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.143932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.144152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.144162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.144372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.144383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.144694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.144705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.145038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.145048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 
00:38:17.573 [2024-06-07 14:40:41.145274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.145284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.145578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.145589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.145878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.145888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.146200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.146213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.146523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.146534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.146861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.146872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.147051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.147061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.147365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.147376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.147710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.147721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.148032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.148043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 
00:38:17.573 [2024-06-07 14:40:41.148369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.148380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.573 [2024-06-07 14:40:41.148702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.573 [2024-06-07 14:40:41.148721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.573 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.149045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.149055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.149259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.149269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.149626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.149637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.149821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.149833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.150137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.150149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.150479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.150491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.150830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.150841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.151026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.151036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 
00:38:17.574 [2024-06-07 14:40:41.151223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.151233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.151532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.151542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.151881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.151892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.152223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.152234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.152547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.152558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.152871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.152882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.153221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.153232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.153519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.153529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.153821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.153832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.154000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.154011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 
00:38:17.574 [2024-06-07 14:40:41.154359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.154370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.154465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.154474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.154751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.154761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.155056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.155067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.155371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.155382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.155674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.155684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.156016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.156026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.156370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.156381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.156732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.156743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.157051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.157062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 
00:38:17.574 [2024-06-07 14:40:41.157415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.157427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.157726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.157737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.158050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.158060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.158371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.158383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.158706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.158718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.158983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.158993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.159262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.159272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.159631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.159641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.159837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.159846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 00:38:17.574 [2024-06-07 14:40:41.160010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.160020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.574 qpair failed and we were unable to recover it. 
00:38:17.574 [2024-06-07 14:40:41.160359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.574 [2024-06-07 14:40:41.160370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.160584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.160594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.160776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.160787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.161121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.161132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.161419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.161432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.161715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.161726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.161904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.161915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.162200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.162210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.162524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.162534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.162826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.162836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 
00:38:17.575 [2024-06-07 14:40:41.163059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.163070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.163332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.163342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.163665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.163676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.163984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.163995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.164330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.164340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.164722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.164733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.165067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.165078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.165369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.165380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.165688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.165699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.166020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.166030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 
00:38:17.575 [2024-06-07 14:40:41.166348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.166359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.166696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.166709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.167056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.167067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.167418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.167431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.167632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.167643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.167907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.167917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.168290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.168301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.168619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.168630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.168934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.168945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.169285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.169295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 
00:38:17.575 [2024-06-07 14:40:41.169668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.169679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.169943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.169953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.170255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.170266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.170554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.170564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.170898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.170909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.171101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.171114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.171446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.171457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.171770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.171782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.171938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.575 [2024-06-07 14:40:41.171949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.575 qpair failed and we were unable to recover it. 00:38:17.575 [2024-06-07 14:40:41.172237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.172248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 
00:38:17.576 [2024-06-07 14:40:41.172440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.172450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.172753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.172764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.173079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.173089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.173278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.173289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.173557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.173568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.173884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.173895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.174226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.174237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.174537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.174549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.174877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.174889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.175257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.175268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 
00:38:17.576 [2024-06-07 14:40:41.175569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.175579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.175893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.175904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.176083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.176092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.176417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.176428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.176736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.176745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.177050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.177062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.177376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.177387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.177567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.177577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.177954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.177964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.178311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.178323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 
00:38:17.576 [2024-06-07 14:40:41.178521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.178532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.178819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.178830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.576 [2024-06-07 14:40:41.179165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.576 [2024-06-07 14:40:41.179176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.576 qpair failed and we were unable to recover it. 00:38:17.853 [2024-06-07 14:40:41.179582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.179595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.853 qpair failed and we were unable to recover it. 00:38:17.853 [2024-06-07 14:40:41.180015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.180027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.853 qpair failed and we were unable to recover it. 00:38:17.853 [2024-06-07 14:40:41.180346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.180357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.853 qpair failed and we were unable to recover it. 00:38:17.853 [2024-06-07 14:40:41.180635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.180645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.853 qpair failed and we were unable to recover it. 00:38:17.853 [2024-06-07 14:40:41.180963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.180974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.853 qpair failed and we were unable to recover it. 00:38:17.853 [2024-06-07 14:40:41.181156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.181166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.853 qpair failed and we were unable to recover it. 00:38:17.853 [2024-06-07 14:40:41.181492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.181502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.853 qpair failed and we were unable to recover it. 
00:38:17.853 [2024-06-07 14:40:41.181867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.181879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.853 qpair failed and we were unable to recover it. 00:38:17.853 [2024-06-07 14:40:41.182081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.182091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.853 qpair failed and we were unable to recover it. 00:38:17.853 [2024-06-07 14:40:41.182215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.182223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.853 qpair failed and we were unable to recover it. 00:38:17.853 [2024-06-07 14:40:41.182460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.182470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.853 qpair failed and we were unable to recover it. 00:38:17.853 [2024-06-07 14:40:41.182713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.182723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.853 qpair failed and we were unable to recover it. 00:38:17.853 [2024-06-07 14:40:41.183042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.183054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.853 qpair failed and we were unable to recover it. 00:38:17.853 [2024-06-07 14:40:41.183379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.183390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.853 qpair failed and we were unable to recover it. 00:38:17.853 [2024-06-07 14:40:41.183688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.183699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.853 qpair failed and we were unable to recover it. 00:38:17.853 [2024-06-07 14:40:41.184030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.853 [2024-06-07 14:40:41.184041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.184365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.184376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 
00:38:17.854 [2024-06-07 14:40:41.184591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.184600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.184949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.184960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.185290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.185300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.185632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.185642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.185949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.185960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.186320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.186332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.186636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.186647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.186976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.186987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.187317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.187329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.187674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.187686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 
00:38:17.854 [2024-06-07 14:40:41.187991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.188002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.188356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.188367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.188680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.188691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.188994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.189005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.189310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.189321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.189522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.189533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.189858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.189869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.190210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.190222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.190531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.190542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.190875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.190885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 
00:38:17.854 [2024-06-07 14:40:41.191216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.191227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.191535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.191546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.191850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.191860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.192175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.192186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.192491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.192502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.192800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.192810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.193107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.193118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.193421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.193432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.193741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.193752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.194055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.194066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 
00:38:17.854 [2024-06-07 14:40:41.194379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.194389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.194596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.194606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.194920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.194930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.195209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.195219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.195530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.195540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.195913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.195924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.854 [2024-06-07 14:40:41.196229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.854 [2024-06-07 14:40:41.196241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.854 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.196550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.196561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.196953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.196963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.197266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.197277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 
00:38:17.855 [2024-06-07 14:40:41.197514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.197524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.197859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.197870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.198180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.198191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.198504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.198516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.198815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.198826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.199163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.199174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.199379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.199390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.199608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.199618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.199865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.199876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.200173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.200184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 
00:38:17.855 [2024-06-07 14:40:41.200493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.200505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.200818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.200829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.201187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.201203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.201536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.201546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.201860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.201871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.202066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.202078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.202393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.202404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.202689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.202699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.203002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.203014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.203344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.203354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 
00:38:17.855 [2024-06-07 14:40:41.203543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.203554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.203867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.203878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.204189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.204202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.204516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.204529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.204851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.204862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.205200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.205211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.205501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.205511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.205822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.205833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.206145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.206155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.206495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.206505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 
00:38:17.855 [2024-06-07 14:40:41.206729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.206739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.207070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.207080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.855 [2024-06-07 14:40:41.207397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.855 [2024-06-07 14:40:41.207409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.855 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.207724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.207734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.208048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.208059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.208318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.208328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.208650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.208660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.209001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.209012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.209320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.209331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.209642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.209653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 
00:38:17.856 [2024-06-07 14:40:41.209959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.209969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.210276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.210286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.210564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.210575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.210905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.210915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.211063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.211073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.211365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.211376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.211684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.211695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.211893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.211903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.212224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.212234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.212605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.212616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 
00:38:17.856 [2024-06-07 14:40:41.212920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.212932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.213263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.213274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.213612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.213622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.213987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.213997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.214303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.214314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.214641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.214651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.214949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.214959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.215294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.215305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.215632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.215642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.215944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.215955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 
00:38:17.856 [2024-06-07 14:40:41.216266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.216276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.216614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.216625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.216927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.216937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.217267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.217277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.217595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.217606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.217822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.217833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.218099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.218110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.218298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.218311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.218631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.218643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 00:38:17.856 [2024-06-07 14:40:41.218973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.218984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.856 qpair failed and we were unable to recover it. 
00:38:17.856 [2024-06-07 14:40:41.219333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.856 [2024-06-07 14:40:41.219344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.219657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.219667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.219976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.219987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.220363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.220374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.220678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.220688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.221008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.221018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.221319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.221329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.221631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.221641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.221971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.221982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.222285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.222297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 
00:38:17.857 [2024-06-07 14:40:41.222629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.222640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.222971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.222982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.223328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.223340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.223672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.223682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.224028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.224039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.224281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.224292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.224555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.224565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.224780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.224791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.225039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.225049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 00:38:17.857 [2024-06-07 14:40:41.225383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.857 [2024-06-07 14:40:41.225394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.857 qpair failed and we were unable to recover it. 
00:38:17.857 [2024-06-07 14:40:41.225721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.857 [2024-06-07 14:40:41.225731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420
00:38:17.857 qpair failed and we were unable to recover it.
00:38:17.857-00:38:17.863 [2024-06-07 14:40:41.225919 .. 14:40:41.292120] (the same three-line failure repeats back-to-back for every reconnect attempt in this window: posix_sock_create reports connect() failed, errno = 111; nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x108c730 with addr=10.0.0.2, port=4420; and each attempt ends with "qpair failed and we were unable to recover it.")
00:38:17.863 [2024-06-07 14:40:41.292445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.292457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.292791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.292802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.293128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.293139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.293505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.293516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.293826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.293836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.294177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.294188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.294562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.294573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.294881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.294892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.295223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.295234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.295569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.295579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 
00:38:17.863 [2024-06-07 14:40:41.295924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.295935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.296238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.296249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.296553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.296564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.296907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.296916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.297224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.297235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.297527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.297538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.297937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.297947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.298247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.298258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.298618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.298628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.298965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.298977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 
00:38:17.863 [2024-06-07 14:40:41.299301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.299312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.299601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.299612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.299940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.299950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.300264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.300283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.300495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.300505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.300794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.300804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.301113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.863 [2024-06-07 14:40:41.301124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.863 qpair failed and we were unable to recover it. 00:38:17.863 [2024-06-07 14:40:41.301430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.301442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.301767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.301778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.302113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.302123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 
00:38:17.864 [2024-06-07 14:40:41.302343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.302353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.302667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.302677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.302979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.302990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.303295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.303306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.303633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.303643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.303974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.303986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.304313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.304324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.304637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.304647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.305006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.305017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.305327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.305338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 
00:38:17.864 [2024-06-07 14:40:41.305651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.305661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.305994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.306004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.306333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.306344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.306678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.306688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.306998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.307009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.307349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.307359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.307708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.307720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.308040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.308050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.308357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.308367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.308695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.308705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 
00:38:17.864 [2024-06-07 14:40:41.309017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.309027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.309357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.309368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.309674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.309684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.310023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.310034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.310337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.310348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.310687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.310698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.311010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.311021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.311326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.311336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.311649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.311660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.311994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.312005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 
00:38:17.864 [2024-06-07 14:40:41.312231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.312241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.312553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.312564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.312893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.312905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.313208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.313219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.313437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.313447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.864 qpair failed and we were unable to recover it. 00:38:17.864 [2024-06-07 14:40:41.313752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.864 [2024-06-07 14:40:41.313763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.314063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.314073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.314400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.314411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.314715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.314724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.315028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.315038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 
00:38:17.865 [2024-06-07 14:40:41.315351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.315363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.315700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.315710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.315915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.315925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.316212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.316222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.316534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.316545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.316855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.316865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.317198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.317209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.317547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.317557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.317863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.317875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.318184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.318199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 
00:38:17.865 [2024-06-07 14:40:41.318509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.318520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.318857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.318868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.319185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.319205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.319528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.319539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.319865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.319876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.320122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.320132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.320345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.320356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.320685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.320695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.321003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.321013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.321357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.321370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 
00:38:17.865 [2024-06-07 14:40:41.321675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.321686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.322048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.322058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.322233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.322244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.322544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.322554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.322862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.322873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.323201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.323212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.323512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.323524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.323861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.323871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.324183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.324198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.324501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.324512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 
00:38:17.865 [2024-06-07 14:40:41.324841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.324852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.325189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.325205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.325520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.325530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.325863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.865 [2024-06-07 14:40:41.325874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.865 qpair failed and we were unable to recover it. 00:38:17.865 [2024-06-07 14:40:41.326201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.326212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.326545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.326555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.326885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.326896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.327225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.327236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.327564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.327575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.327909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.327919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 
00:38:17.866 [2024-06-07 14:40:41.328252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.328264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.328578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.328588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.328897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.328908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.329209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.329220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.329527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.329538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.329846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.329856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.330186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.330200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.330576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.330587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.330887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.330899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.331227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.331239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 
00:38:17.866 [2024-06-07 14:40:41.331568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.331578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.331915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.331925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.332296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.332308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.332635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.332646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.332957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.332968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.333268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.333279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.333567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.333577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.333785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.333795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.334120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.334129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 00:38:17.866 [2024-06-07 14:40:41.334441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.334452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it. 
00:38:17.866 [2024-06-07 14:40:41.334778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.866 [2024-06-07 14:40:41.334790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.866 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock errors for tqpair=0x108c730 (addr=10.0.0.2, port=4420) repeat for every retry from 14:40:41.334 through 14:40:41.402; duplicate log entries collapsed ...]
00:38:17.872 [2024-06-07 14:40:41.402794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.402804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.403142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.403153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.403483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.403493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.403805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.403816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.404147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.404157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.404495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.404507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.404823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.404833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.405164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.405177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.405489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.405500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.405836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.405846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 
00:38:17.872 [2024-06-07 14:40:41.406131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.406141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.406453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.406464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.406674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.406684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.407026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.407036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.407227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.407238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.407626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.407637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.407952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.407963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.408256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.408267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.408571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.408581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 00:38:17.872 [2024-06-07 14:40:41.408915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.872 [2024-06-07 14:40:41.408925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.872 qpair failed and we were unable to recover it. 
00:38:17.872 [2024-06-07 14:40:41.409126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.872 [2024-06-07 14:40:41.409135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420
00:38:17.872 qpair failed and we were unable to recover it.
00:38:17.872 [2024-06-07 14:40:41.409420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.872 [2024-06-07 14:40:41.409431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420
00:38:17.872 qpair failed and we were unable to recover it.
00:38:17.872 [2024-06-07 14:40:41.409798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.872 [2024-06-07 14:40:41.409808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420
00:38:17.873 qpair failed and we were unable to recover it.
00:38:17.873 [2024-06-07 14:40:41.410002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.873 [2024-06-07 14:40:41.410011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420
00:38:17.873 qpair failed and we were unable to recover it.
00:38:17.873 [2024-06-07 14:40:41.410547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.873 [2024-06-07 14:40:41.410637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2184000b90 with addr=10.0.0.2, port=4420
00:38:17.873 qpair failed and we were unable to recover it.
00:38:17.873 [2024-06-07 14:40:41.410897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.873 [2024-06-07 14:40:41.410932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2184000b90 with addr=10.0.0.2, port=4420
00:38:17.873 qpair failed and we were unable to recover it.
00:38:17.873 [2024-06-07 14:40:41.411267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.873 [2024-06-07 14:40:41.411301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2184000b90 with addr=10.0.0.2, port=4420
00:38:17.873 qpair failed and we were unable to recover it.
00:38:17.873 [2024-06-07 14:40:41.411681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.873 [2024-06-07 14:40:41.411711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2184000b90 with addr=10.0.0.2, port=4420
00:38:17.873 qpair failed and we were unable to recover it.
00:38:17.873 [2024-06-07 14:40:41.411947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.873 [2024-06-07 14:40:41.411959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420
00:38:17.873 qpair failed and we were unable to recover it.
00:38:17.873 [2024-06-07 14:40:41.412149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.873 [2024-06-07 14:40:41.412161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420
00:38:17.873 qpair failed and we were unable to recover it.
00:38:17.873 [2024-06-07 14:40:41.412470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.412481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.412795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.412806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.413141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.413151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.413466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.413478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.413679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.413692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.413896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.413906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.414046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.414056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.414369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.414380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.414680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.414691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.415024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.415035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 
00:38:17.873 [2024-06-07 14:40:41.415376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.415387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.415700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.415710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.416017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.416027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.416323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.416334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.416651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.416662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.416991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.417001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.417339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.417350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.417657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.417667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.418004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.418015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.418314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.418325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 
00:38:17.873 [2024-06-07 14:40:41.418650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.418661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.418988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.418999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.419336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.419347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.419568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.419578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.419899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.419910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.420215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.420227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.420519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.420529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.420879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.420889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.421200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.421210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.421520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.421530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 
00:38:17.873 [2024-06-07 14:40:41.421794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.873 [2024-06-07 14:40:41.421804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.873 qpair failed and we were unable to recover it. 00:38:17.873 [2024-06-07 14:40:41.422129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.422141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.422441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.422453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.422764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.422775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.423119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.423129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.423513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.423526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.423864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.423874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.424188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.424204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.424589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.424600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.424927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.424937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 
00:38:17.874 [2024-06-07 14:40:41.425253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.425264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.425623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.425633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.426010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.426020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.426329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.426340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.426518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.426528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.426817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.426827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.427042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.427052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.427304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.427314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.427477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.427489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.427689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.427699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 
00:38:17.874 [2024-06-07 14:40:41.428037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.428046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.428363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.428375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.428676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.428686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.428896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.428906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.429068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.429078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.429301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.429311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.429633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.429644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.429978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.429989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.430318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.430328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.430602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.430613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 
00:38:17.874 [2024-06-07 14:40:41.430960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.430971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.431159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.431169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.431355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.431365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.431702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.431713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.432054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.432065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.432425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.432436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.432752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.432763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.874 [2024-06-07 14:40:41.432939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.874 [2024-06-07 14:40:41.432949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.874 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.433118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.433128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.433484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.433496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 
00:38:17.875 [2024-06-07 14:40:41.433817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.433827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.434073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.434082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.434417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.434428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.434680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.434691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.434981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.434992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.435306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.435318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.435605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.435616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.436006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.436017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.436283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.436294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.436484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.436495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 
00:38:17.875 [2024-06-07 14:40:41.436788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.436799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.437114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.437124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.437288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.437299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.437541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.437551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.437859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.437870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.438170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.438181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.438519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.438530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.438856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.438868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.439210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.439222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.439556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.439566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 
00:38:17.875 [2024-06-07 14:40:41.439901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.439911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.440095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.440105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.440332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.440342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.440656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.440667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.440986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.440995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.441336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.441346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.441661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.441671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.441969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.441979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.442340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.442351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.442608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.442620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 
00:38:17.875 [2024-06-07 14:40:41.442930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.442941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.443119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.443129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.443462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.443473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.443786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.875 [2024-06-07 14:40:41.443795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.875 qpair failed and we were unable to recover it. 00:38:17.875 [2024-06-07 14:40:41.444107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.876 [2024-06-07 14:40:41.444118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.876 qpair failed and we were unable to recover it. 00:38:17.876 [2024-06-07 14:40:41.444339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.876 [2024-06-07 14:40:41.444350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.876 qpair failed and we were unable to recover it. 00:38:17.876 [2024-06-07 14:40:41.444684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.876 [2024-06-07 14:40:41.444694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.876 qpair failed and we were unable to recover it. 00:38:17.876 [2024-06-07 14:40:41.445009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.876 [2024-06-07 14:40:41.445020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.876 qpair failed and we were unable to recover it. 00:38:17.876 [2024-06-07 14:40:41.445342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.876 [2024-06-07 14:40:41.445354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.876 qpair failed and we were unable to recover it. 00:38:17.876 [2024-06-07 14:40:41.445678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.876 [2024-06-07 14:40:41.445688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:17.876 qpair failed and we were unable to recover it. 
00:38:17.876 - 00:38:18.157 [2024-06-07 14:40:41.445996 - 14:40:41.505516] the same three-record failure repeats for every connection retry in this interval: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420, and each time the qpair failed and we were unable to recover it.
00:38:18.157 [2024-06-07 14:40:41.505850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.505860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.506227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.506237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.506564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.506575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.506895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.506905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.507236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.507247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.507644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.507654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.507998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.508008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.508318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.508329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.508667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.508678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.508974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.508984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 
00:38:18.157 [2024-06-07 14:40:41.509338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.509349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.509660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.509671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.510006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.510015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.510345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.510355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.510669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.510680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.510982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.510993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.511326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.511338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.511663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.511674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.512002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.512012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.512292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.512302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 
00:38:18.157 [2024-06-07 14:40:41.512672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.512682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.512902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.512912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.513248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.513259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.513573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.513584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.513767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.513778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.514002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.514013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.514304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.514315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.514634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.514645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.514988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.514998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.515308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.515320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 
00:38:18.157 [2024-06-07 14:40:41.515657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.515667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.515971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.515982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.516325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.516336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.516647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.516658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.516973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.516983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.517296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.517306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.517680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.517690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.157 qpair failed and we were unable to recover it. 00:38:18.157 [2024-06-07 14:40:41.517991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.157 [2024-06-07 14:40:41.518001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.518304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.518315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.518530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.518540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 
00:38:18.158 [2024-06-07 14:40:41.518841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.518852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.519181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.519191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.519559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.519569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.519878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.519891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.520232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.520243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.520464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.520475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.520785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.520794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.521125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.521136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.521478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.521489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.521816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.521827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 
00:38:18.158 [2024-06-07 14:40:41.522172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.522182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.522474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.522485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.522807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.522817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.523146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.523157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.523521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.523531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.523846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.523858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.524204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.524215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.524545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.524556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.524896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.524907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.525217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.525228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 
00:38:18.158 [2024-06-07 14:40:41.525547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.525557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.525866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.525877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.526206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.526217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.526522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.526533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.526868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.526878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.527181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.527192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.527380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.527392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.527677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.527688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.528021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.528032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.528394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.528405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 
00:38:18.158 [2024-06-07 14:40:41.528716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.528728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.528921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.158 [2024-06-07 14:40:41.528933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.158 qpair failed and we were unable to recover it. 00:38:18.158 [2024-06-07 14:40:41.529237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.529249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.529461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.529470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.529679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.529689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.529946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.529956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.530293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.530305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.530632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.530643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.531015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.531025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.531377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.531388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 
00:38:18.159 [2024-06-07 14:40:41.531723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.531734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.532044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.532054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.532401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.532413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.532618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.532629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.532805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.532816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.533137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.533147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.533443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.533455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.533768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.533779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.534073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.534083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.534382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.534393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 
00:38:18.159 [2024-06-07 14:40:41.534704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.534715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.535044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.535054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.535239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.535250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.535574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.535584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.535893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.535903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.536187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.536202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.536489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.536499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.536836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.536846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.537168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.537179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.537381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.537392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 
00:38:18.159 [2024-06-07 14:40:41.537681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.537692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.538005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.538015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.538319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.538331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.159 qpair failed and we were unable to recover it. 00:38:18.159 [2024-06-07 14:40:41.538663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.159 [2024-06-07 14:40:41.538673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.539009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.539020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.539345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.539355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.539668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.539678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.539862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.539872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.540171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.540182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.540501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.540512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 
00:38:18.160 [2024-06-07 14:40:41.540838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.540849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.541153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.541163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.541495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.541506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.541811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.541821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.542128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.542139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.542457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.542468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.542807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.542818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.543125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.543136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.543451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.543462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.543829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.543840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 
00:38:18.160 [2024-06-07 14:40:41.544138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.544149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.544475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.544486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.544814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.544826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.545152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.545164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.545461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.545473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.545806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.545818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.546003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.546016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.546301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.546312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.546648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.546658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.546968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.546978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 
00:38:18.160 [2024-06-07 14:40:41.547306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.547318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.547646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.547656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.547993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.548004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.548332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.548342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.548650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.548660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.548977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.548987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.549330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.549340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.549668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.549679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.550009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.550022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.160 qpair failed and we were unable to recover it. 00:38:18.160 [2024-06-07 14:40:41.550346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.160 [2024-06-07 14:40:41.550357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 
00:38:18.161 [2024-06-07 14:40:41.550721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.550732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.551034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.551044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.551363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.551374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.551639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.551649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.551941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.551951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.552280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.552291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.552631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.552641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.552949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.552959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.553284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.553294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.553598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.553609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 
00:38:18.161 [2024-06-07 14:40:41.553937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.553948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.554250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.554261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.554570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.554580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.554909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.554920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.555249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.555260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.555622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.555632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.555960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.555970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.556300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.556311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.556622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.556633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.556959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.556970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 
00:38:18.161 [2024-06-07 14:40:41.557307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.557319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.557644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.557654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.557985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.557996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.558325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.558336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.558668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.558679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.559009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.559024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.559356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.559367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.559552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.559563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.559773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.559783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.560112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.560122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 
00:38:18.161 [2024-06-07 14:40:41.560450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.560461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.560647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.560658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.560955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.560965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.561345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.561357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.561674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.561684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.561981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.561991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.562328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.562339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.562671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.161 [2024-06-07 14:40:41.562682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.161 qpair failed and we were unable to recover it. 00:38:18.161 [2024-06-07 14:40:41.562995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.563006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.563192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.563207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 
00:38:18.162 [2024-06-07 14:40:41.563524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.563534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.563842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.563853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.564184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.564199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.564568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.564579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.564891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.564901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.565232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.565243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.565593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.565603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.565906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.565917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.566249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.566260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.566572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.566583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 
00:38:18.162 [2024-06-07 14:40:41.566885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.566896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.567222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.567233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.567565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.567577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.567906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.567917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.568221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.568232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.568595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.568606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.568944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.568955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.569284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.569295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.569606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.569617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.569922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.569932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 
00:38:18.162 [2024-06-07 14:40:41.570273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.570284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.570607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.570618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.570960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.570971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.571286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.571297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.571639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.571649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.571857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.571868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.572172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.572183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.572481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.572493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.572828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.572838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.573127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.573138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 
00:38:18.162 [2024-06-07 14:40:41.573455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.573466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.573825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.573836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.574171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.162 [2024-06-07 14:40:41.574181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.162 qpair failed and we were unable to recover it. 00:38:18.162 [2024-06-07 14:40:41.574540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.574551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.574865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.574876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.575180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.575190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.575528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.575539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.575849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.575860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.576170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.576181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.576526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.576537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 
00:38:18.163 [2024-06-07 14:40:41.576877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.576887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.577228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.577241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.577552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.577563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.577863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.577874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.578207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.578217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.578518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.578529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.578829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.578839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.579161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.579172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.579537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.579548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.579850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.579861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 
00:38:18.163 [2024-06-07 14:40:41.580181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.580192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.580497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.580509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.580797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.580807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.581117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.581129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.581330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.581341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.581675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.581686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.581986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.581996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.582333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.582344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.582675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.582685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.583015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.583026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 
00:38:18.163 [2024-06-07 14:40:41.583363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.583373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.583722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.583733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.584057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.584067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.584409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.584420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.584753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.584763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.585107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.585117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.585444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.585456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.585784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.585795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.586099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.586109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.586426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.586437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 
00:38:18.163 [2024-06-07 14:40:41.586743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.163 [2024-06-07 14:40:41.586754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.163 qpair failed and we were unable to recover it. 00:38:18.163 [2024-06-07 14:40:41.587068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.587079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.587407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.587418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.587746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.587757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.588067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.588078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.588372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.588383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.588676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.588686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.588993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.589004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.589314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.589325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.589642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.589654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 
00:38:18.164 [2024-06-07 14:40:41.589987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.589999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.590311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.590322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.590652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.590662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.590972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.590982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.591317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.591328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.591656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.591667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.591998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.592008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.592302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.592313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.592648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.592658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.592984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.592995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 
00:38:18.164 [2024-06-07 14:40:41.593323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.593334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.593654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.593666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.594004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.594015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.594349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.594361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.594707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.594717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.594934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.594944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.595250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.595261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.595566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.595576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.595914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.595924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 00:38:18.164 [2024-06-07 14:40:41.596239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.164 [2024-06-07 14:40:41.596250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.164 qpair failed and we were unable to recover it. 
00:38:18.164 [2024-06-07 14:40:41.596581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.596591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.596902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.596912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.597242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.597254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.597562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.597574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.597911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.597921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.598248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.598259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.598564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.598575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.598862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.598874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.599189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.599210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.599509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.599519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 
00:38:18.165 [2024-06-07 14:40:41.599856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.599867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.600218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.600229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.600569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.600581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.600883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.600893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.601211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.601221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.601548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.601559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.601935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.601946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.602248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.602258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.602586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.602596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.602902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.602913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 
00:38:18.165 [2024-06-07 14:40:41.603250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.603261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.603595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.603606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.603873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.603884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.604086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.604096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.604390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.604401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.604705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.604715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.604978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.604988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.605318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.605330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.605672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.605682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 00:38:18.165 [2024-06-07 14:40:41.606009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.165 [2024-06-07 14:40:41.606020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.165 qpair failed and we were unable to recover it. 
00:38:18.165 [2024-06-07 14:40:41.606332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:18.165 [2024-06-07 14:40:41.606343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420
00:38:18.165 qpair failed and we were unable to recover it.
00:38:18.165 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnection attempt between 14:40:41.606 and 14:40:41.672 ...]
00:38:18.171 [2024-06-07 14:40:41.672801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:18.171 [2024-06-07 14:40:41.672812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420
00:38:18.171 qpair failed and we were unable to recover it.
00:38:18.171 [2024-06-07 14:40:41.673185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.673201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.673475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.673486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.673812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.673823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.674153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.674163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.674479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.674489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.674669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.674679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.674887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.674897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.675215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.675225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.675547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.675558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.675760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.675772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 
00:38:18.171 [2024-06-07 14:40:41.676087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.676099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.676279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.676289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.676602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.676613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.676943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.676954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.677261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.677271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.677483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.677493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.677826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.677837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.678156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.678167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.678564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.678575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 00:38:18.171 [2024-06-07 14:40:41.678884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.171 [2024-06-07 14:40:41.678894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.171 qpair failed and we were unable to recover it. 
00:38:18.171 [2024-06-07 14:40:41.679191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.679205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.679569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.679579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.679777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.679787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.680116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.680127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.680433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.680444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.680773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.680784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.680986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.680996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.681286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.681296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.681635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.681645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.681956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.681967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 
00:38:18.172 [2024-06-07 14:40:41.682326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.682337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.682659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.682670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.682948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.682959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.683278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.683288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.683605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.683615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.683927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.683938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.684281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.684292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.684601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.684613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.684832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.684844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.685096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.685106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 
00:38:18.172 [2024-06-07 14:40:41.685416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.685427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.685759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.685769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.686117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.686127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.686442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.686453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.686759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.686770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.686963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.686972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.687146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.687156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.687449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.687460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.687637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.687648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.687845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.687857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 
00:38:18.172 [2024-06-07 14:40:41.688178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.688188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.688523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.688533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.688877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.688891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.689187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.689202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.689503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.689513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.689841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.689852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.690192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.690208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.690526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.690536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.690866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.172 [2024-06-07 14:40:41.690876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.172 qpair failed and we were unable to recover it. 00:38:18.172 [2024-06-07 14:40:41.691193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.691208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 
00:38:18.173 [2024-06-07 14:40:41.691516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.691527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.691837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.691848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.692151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.692162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.692496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.692506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.692844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.692855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.693039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.693049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.693372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.693383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.693714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.693724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.694012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.694022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.694429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.694440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 
00:38:18.173 [2024-06-07 14:40:41.694778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.694789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.695177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.695186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.695578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.695588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.695901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.695911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.696223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.696234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.696541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.696551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.696796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.696806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.697115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.697125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.697415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.697425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.697635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.697648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 
00:38:18.173 [2024-06-07 14:40:41.697955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.697966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.698295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.698306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.698617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.698628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.698953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.698963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.699298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.699309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.699633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.699644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.699938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.699948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.700283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.700294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.700589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.700599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.700772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.700783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 
00:38:18.173 [2024-06-07 14:40:41.701074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.701084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.701465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.701476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.701786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.701796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.702102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.702113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.702446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.173 [2024-06-07 14:40:41.702458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.173 qpair failed and we were unable to recover it. 00:38:18.173 [2024-06-07 14:40:41.702769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.702779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.703114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.703124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.703432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.703443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.703755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.703766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.704085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.704097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 
00:38:18.174 [2024-06-07 14:40:41.704415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.704426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.704714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.704724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.705042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.705053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.705360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.705370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.705703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.705713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.706114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.706127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.706421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.706434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.706802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.706813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.707153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.707165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.707479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.707491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 
00:38:18.174 [2024-06-07 14:40:41.707812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.707823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.708113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.708124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.708439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.708452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.708777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.708788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.709097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.709109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.709446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.709457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.709788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.709800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.710129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.710141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.710361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.710373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.710677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.710689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 
00:38:18.174 [2024-06-07 14:40:41.711018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.711029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.711339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.711349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.711660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.711671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.711998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.712008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.712349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.712361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.712686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.712696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.712911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.712921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.713231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.713242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.713573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.713583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 00:38:18.174 [2024-06-07 14:40:41.713885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.713896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.174 qpair failed and we were unable to recover it. 
00:38:18.174 [2024-06-07 14:40:41.714225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.174 [2024-06-07 14:40:41.714236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.714575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.714586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.714919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.714929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.715253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.715264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.715540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.715551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.715868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.715879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.716112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.716122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.716433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.716445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.716773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.716784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.717111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.717122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 
00:38:18.175 [2024-06-07 14:40:41.717433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.717444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.717770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.717781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.718094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.718105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.718290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.718302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.718612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.718623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.718951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.718962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.719140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.719153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.719482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.719496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.719831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.719843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.720167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.720178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 
00:38:18.175 [2024-06-07 14:40:41.720506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.720518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.720845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.720857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.721206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.721219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.721520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.721531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.721858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.721868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.722201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.722212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.722513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.722525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.722851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.722862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.723193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.723207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.723525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.723536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 
00:38:18.175 [2024-06-07 14:40:41.723822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.723832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.724164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.724175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.724498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.724509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.724887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.724897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.725245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.725256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.725575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.725585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.725900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.725910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.726214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.726225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.726584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.175 [2024-06-07 14:40:41.726595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.175 qpair failed and we were unable to recover it. 00:38:18.175 [2024-06-07 14:40:41.726908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.726919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 
00:38:18.176 [2024-06-07 14:40:41.727246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.727257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.727459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.727469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.727780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.727791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.728119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.728129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.728483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.728496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.728887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.728897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.729237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.729248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.729571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.729582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.729866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.729877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.730094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.730104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 
00:38:18.176 [2024-06-07 14:40:41.730407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.730417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.730734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.730745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.731043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.731055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.731372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.731382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.731689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.731698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.732053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.732064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.732376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.732386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.732739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.732750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.733116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.733126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.733412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.733422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 
00:38:18.176 [2024-06-07 14:40:41.733737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.733747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.733970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.733981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.734270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.734281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.734592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.734602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.734908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.734919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.735245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.735255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.735563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.735572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.735878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.735888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.736251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.736261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.736596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.736607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 
00:38:18.176 [2024-06-07 14:40:41.736938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.736948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.737250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.737264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.737571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.737581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.737924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.737934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.738279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.738290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.738591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.738602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.738776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.738785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.739099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.176 [2024-06-07 14:40:41.739109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.176 qpair failed and we were unable to recover it. 00:38:18.176 [2024-06-07 14:40:41.739436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.739447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.739761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.739771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 
00:38:18.177 [2024-06-07 14:40:41.739980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.739990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.740170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.740180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.740501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.740512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.740833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.740843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.741172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.741182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.741506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.741518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.741849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.741859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.742165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.742176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.742377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.742388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.742663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.742674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 
00:38:18.177 [2024-06-07 14:40:41.742980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.742991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.743260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.743271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.743484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.743494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.743665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.743676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.743973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.743984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.744320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.744332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.744667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.744678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.744991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.745002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.745300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.745311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.745616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.745626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 
00:38:18.177 [2024-06-07 14:40:41.745852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.745862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.746177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.746188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.746491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.746502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.746839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.746849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.747025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.747034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.747327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.747339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.747643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.747653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.747978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.747989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.748257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.748267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.748590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.748600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 
00:38:18.177 [2024-06-07 14:40:41.748940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.748950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.749269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.749280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.749457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.749469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.749807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.749818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.750151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.750164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.750475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.750485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.177 [2024-06-07 14:40:41.750814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.177 [2024-06-07 14:40:41.750824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.177 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.751155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.751166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.751480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.751492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.751801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.751812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 
00:38:18.178 [2024-06-07 14:40:41.752157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.752168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.752496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.752508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.752845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.752856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.753186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.753201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.753552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.753563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.753895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.753906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.754264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.754274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.754571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.754582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.754909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.754920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.755229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.755240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 
00:38:18.178 [2024-06-07 14:40:41.755628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.755639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.755929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.755939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.756269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.756280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.756472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.756482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.756768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.756779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.757110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.757121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.757436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.757447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.757805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.757816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.758160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.758170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.758484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.758497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 
00:38:18.178 [2024-06-07 14:40:41.758874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.758884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.759189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.759205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.759501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.759512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.759781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.759791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.760109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.760120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.760406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.760416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.760590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.760601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.760927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.760938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.178 [2024-06-07 14:40:41.761253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.178 [2024-06-07 14:40:41.761263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.178 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.761637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.761648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 
00:38:18.179 [2024-06-07 14:40:41.761940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.761950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.762310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.762321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.762540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.762550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.762848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.762858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.763134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.763145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.763458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.763469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.763774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.763785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.764103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.764113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.764342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.764353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.764651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.764661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 
00:38:18.179 [2024-06-07 14:40:41.764807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.764818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.765140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.765150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.765489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.765500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.765782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.765792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.766124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.766134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.766455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.766467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.766790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.766803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.767097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.767110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.767424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.767435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.767749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.767761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 
00:38:18.179 [2024-06-07 14:40:41.768088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.768099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.768483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.768494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.768832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.768843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.769093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.769104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.769383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.769395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.769708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.769720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.769930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.769941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.770267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.770277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.770608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.770619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.770953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.770963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 
00:38:18.179 [2024-06-07 14:40:41.771268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.771280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.771594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.771604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.771794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.771805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.772112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.772122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.772441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.772451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.772775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.772785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.773090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.179 [2024-06-07 14:40:41.773101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.179 qpair failed and we were unable to recover it. 00:38:18.179 [2024-06-07 14:40:41.773421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.773432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.773741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.773753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.774084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.774095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 
00:38:18.180 [2024-06-07 14:40:41.774300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.774310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.774539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.774551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.774870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.774881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.775237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.775247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.775535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.775545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.775823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.775833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.776173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.776183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.776387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.776398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.776714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.776725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.777069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.777080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 
00:38:18.180 [2024-06-07 14:40:41.777258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.777269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.777605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.777615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.777830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.777840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.778155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.778165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.778495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.778506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.778823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.778833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.779144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.779155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.779392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.779403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.779687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.779697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.780010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.780020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 
00:38:18.180 [2024-06-07 14:40:41.780293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.780304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.780650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.780660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.780989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.781000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.781206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.781216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.781771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.781861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2184000b90 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.782357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.782396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2184000b90 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.782767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.782798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2184000b90 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.783019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.783032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.783390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.783429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.783777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.783790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 
00:38:18.180 [2024-06-07 14:40:41.784073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.784083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.784401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.784413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.784729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.784740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.785078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.785088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.785399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.785413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.180 qpair failed and we were unable to recover it. 00:38:18.180 [2024-06-07 14:40:41.785743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.180 [2024-06-07 14:40:41.785754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.181 qpair failed and we were unable to recover it. 00:38:18.181 [2024-06-07 14:40:41.786084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.181 [2024-06-07 14:40:41.786095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.181 qpair failed and we were unable to recover it. 00:38:18.181 [2024-06-07 14:40:41.786390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.181 [2024-06-07 14:40:41.786401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.181 qpair failed and we were unable to recover it. 00:38:18.181 [2024-06-07 14:40:41.786748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.181 [2024-06-07 14:40:41.786759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.181 qpair failed and we were unable to recover it. 00:38:18.181 [2024-06-07 14:40:41.787074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.181 [2024-06-07 14:40:41.787084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.181 qpair failed and we were unable to recover it. 
00:38:18.181 [2024-06-07 14:40:41.787486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.181 [2024-06-07 14:40:41.787497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.181 qpair failed and we were unable to recover it. 00:38:18.457 [2024-06-07 14:40:41.787718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.457 [2024-06-07 14:40:41.787729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.457 qpair failed and we were unable to recover it. 00:38:18.457 [2024-06-07 14:40:41.788037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.457 [2024-06-07 14:40:41.788048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.457 qpair failed and we were unable to recover it. 00:38:18.457 [2024-06-07 14:40:41.788310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.457 [2024-06-07 14:40:41.788321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.457 qpair failed and we were unable to recover it. 00:38:18.457 [2024-06-07 14:40:41.788640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.457 [2024-06-07 14:40:41.788654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.457 qpair failed and we were unable to recover it. 00:38:18.457 [2024-06-07 14:40:41.788959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.457 [2024-06-07 14:40:41.788969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.457 qpair failed and we were unable to recover it. 00:38:18.457 [2024-06-07 14:40:41.789260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.457 [2024-06-07 14:40:41.789272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.789565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.789575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.789936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.789946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.790286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.790297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 
00:38:18.458 [2024-06-07 14:40:41.790610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.790621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.790967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.790978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.791296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.791307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.791607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.791618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.791949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.791960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.792290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.792302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.792604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.792614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.792945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.792956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.793235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.793246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.793467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.793478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 
00:38:18.458 [2024-06-07 14:40:41.793812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.793823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.794185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.794201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.794513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.794523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.794713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.794725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.795041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.795052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.795378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.795390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.795728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.795738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.796037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.796048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.796384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.796396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.796702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.796714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 
00:38:18.458 [2024-06-07 14:40:41.796920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.796932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.797275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.797288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.797603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.797613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.797795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.797807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.798141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.798151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.798445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.798456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.798768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.798778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.799116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.799126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.799447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.799457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.799584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.799596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 
00:38:18.458 [2024-06-07 14:40:41.799900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.799911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.800220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.800232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.800548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.800559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.458 [2024-06-07 14:40:41.800891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.458 [2024-06-07 14:40:41.800901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.458 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.801226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.801237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.801477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.801488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.801772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.801782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.802121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.802131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.802471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.802481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.802823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.802834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 
00:38:18.459 [2024-06-07 14:40:41.803151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.803162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.803451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.803462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.803792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.803804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.804146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.804156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.804477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.804488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.804799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.804810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.805113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.805124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.805430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.805442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.805770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.805783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.806113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.806124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 
00:38:18.459 [2024-06-07 14:40:41.806455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.806467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.806783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.806795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.807114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.807125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.807471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.807483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.807796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.807807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.808139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.808150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.808479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.808490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.808823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.808835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.809168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.809180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.809523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.809535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 
00:38:18.459 [2024-06-07 14:40:41.809904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.809916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.810251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.810262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.810581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.810591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.810929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.810941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.811267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.811278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.811444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.811455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.811743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.811754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.812067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.812078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.812412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.812422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.812730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.812741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 
00:38:18.459 [2024-06-07 14:40:41.813061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.813072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.459 [2024-06-07 14:40:41.813376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.459 [2024-06-07 14:40:41.813387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.459 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.813772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.813782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.814095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.814105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.814296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.814308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.814619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.814630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.814738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.814749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.815069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.815079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.815309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.815319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.815662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.815672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 
00:38:18.460 [2024-06-07 14:40:41.815977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.815989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.816324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.816335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.816627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.816638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.816959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.816969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.817270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.817281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.817599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.817609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.817938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.817949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.818293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.818304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.818606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.818618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.818932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.818942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 
00:38:18.460 [2024-06-07 14:40:41.819274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.819285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.819579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.819590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.819902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.819912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.820244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.820255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.820570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.820580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.820885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.820896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.821255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.821266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.821570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.821580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.821915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.821926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.822136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.822147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 
00:38:18.460 [2024-06-07 14:40:41.822348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.822360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.822684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.822694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.823024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.823035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.823347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.823358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.823646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.823657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.823971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.823982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.824308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.824319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.824611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.824622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.824993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.825006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.460 qpair failed and we were unable to recover it. 00:38:18.460 [2024-06-07 14:40:41.825335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.460 [2024-06-07 14:40:41.825348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 
00:38:18.461 [2024-06-07 14:40:41.825677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.825688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.826017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.826028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.826368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.826381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.826724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.826735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.827080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.827091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.827400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.827412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.827729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.827742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.827950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.827961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.828231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.828242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.828556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.828566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 
00:38:18.461 [2024-06-07 14:40:41.828896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.828907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.829288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.829298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.829576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.829586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.829921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.829932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.830238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.830249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.830560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.830570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.830876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.830887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.831201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.831212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.831519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.831530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.831859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.831869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 
00:38:18.461 [2024-06-07 14:40:41.832181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.832192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.832607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.832617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.832810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.832821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.833156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.833167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.833399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.833410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.833706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.833717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.834087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.834098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.834421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.834432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.834762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.834772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.835094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.835105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 
00:38:18.461 [2024-06-07 14:40:41.835451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.835462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.835773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.835784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.836058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.836069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.836383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.836395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.836701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.836712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.837104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.837115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.837409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.837420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.461 [2024-06-07 14:40:41.837737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.461 [2024-06-07 14:40:41.837747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.461 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.838047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.838058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.838377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.838387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 
00:38:18.462 [2024-06-07 14:40:41.838736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.838746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.839083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.839094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.839433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.839444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.839776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.839787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.840118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.840129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.840268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.840278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.840593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.840603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.840822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.840832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.841042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.841053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.841392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.841404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 
00:38:18.462 [2024-06-07 14:40:41.841735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.841746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.842080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.842090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.842486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.842497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.842831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.842842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.843120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.843131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.843450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.843461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.843768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.843779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.844091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.844102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.844450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.844461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.844830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.844841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 
00:38:18.462 [2024-06-07 14:40:41.845191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.845209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.845507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.845517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.845700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.845710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.845978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.845988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.846193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.846207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.846591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.846601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.846870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.846880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.847209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.462 [2024-06-07 14:40:41.847221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.462 qpair failed and we were unable to recover it. 00:38:18.462 [2024-06-07 14:40:41.847447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.847457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.847657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.847668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 
00:38:18.463 [2024-06-07 14:40:41.848008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.848019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.848361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.848372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.848707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.848717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.849012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.849023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.849314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.849325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.849634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.849645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.849841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.849853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.850174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.850183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.850374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.850385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.850737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.850746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 
00:38:18.463 [2024-06-07 14:40:41.851133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.851143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.851447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.851458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.851779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.851790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.852219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.852231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.852519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.852529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.852715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.852726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.853061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.853071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.853359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.853369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.853666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.853676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.853867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.853876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 
00:38:18.463 [2024-06-07 14:40:41.854186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.854200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.854402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.854412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.854726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.854737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.855083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.855094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.855411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.855422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.855616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.855626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.855938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.855949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.856286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.856296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.856625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.856635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.856955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.856966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 
00:38:18.463 [2024-06-07 14:40:41.857310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.857321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.857612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.857625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.857836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.857846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.858202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.858212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.858497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.858507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.858742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.858752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.463 qpair failed and we were unable to recover it. 00:38:18.463 [2024-06-07 14:40:41.859038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.463 [2024-06-07 14:40:41.859049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.859325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.859336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.859637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.859648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.859965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.859975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 
00:38:18.464 [2024-06-07 14:40:41.860184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.860199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.860527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.860537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.860857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.860867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.861178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.861188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.861512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.861523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.861861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.861871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.862189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.862203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.862527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.862538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.862887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.862897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.863232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.863243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 
00:38:18.464 [2024-06-07 14:40:41.863555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.863565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.863769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.863779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.864105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.864116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.864422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.864433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.864750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.864761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.865098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.865109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.865439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.865451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.865780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.865791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.865972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.865987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.866349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.866360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 
00:38:18.464 [2024-06-07 14:40:41.866648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.866657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.866969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.866980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.867313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.867323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.867702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.867714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.868032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.868043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.868348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.868359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.868694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.868705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.869051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.869062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.869375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.869387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.869694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.869705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 
00:38:18.464 [2024-06-07 14:40:41.869851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.869863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.870376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.870466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.870885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.870922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.871463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.871550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.464 [2024-06-07 14:40:41.871958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.464 [2024-06-07 14:40:41.871993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:18.464 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.872494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.872532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.872906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.872919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.873420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.873458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.873769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.873783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.874129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.874140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 
00:38:18.465 [2024-06-07 14:40:41.874474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.874486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.874798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.874809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.875023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.875033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.875368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.875379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.875677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.875687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.875926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.875941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.876271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.876281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.876616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.876627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.876979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.876989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.877318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.877329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 
00:38:18.465 [2024-06-07 14:40:41.877693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.877704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.877926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.877936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.878220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.878232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 821841 Killed "${NVMF_APP[@]}" "$@" 00:38:18.465 [2024-06-07 14:40:41.878436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.878448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.878801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.878811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 14:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:38:18.465 [2024-06-07 14:40:41.879148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.879159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.879457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.879469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 14:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:18.465 [2024-06-07 14:40:41.879642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.879653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 
00:38:18.465 14:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:18.465 [2024-06-07 14:40:41.879876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.879888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 14:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:18.465 [2024-06-07 14:40:41.880206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.880220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 14:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:18.465 [2024-06-07 14:40:41.880579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.880591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.880957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.880969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.881281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.881292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.881609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.881621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.881946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.881957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.882273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.882284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.882503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.882513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 
00:38:18.465 [2024-06-07 14:40:41.882879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.882890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.883244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.883255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.883633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.883644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.883861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.465 [2024-06-07 14:40:41.883873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.465 qpair failed and we were unable to recover it. 00:38:18.465 [2024-06-07 14:40:41.884188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.884203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.884511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.884522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.884784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.884797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.884992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.885003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.885372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.885384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.885709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.885720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 
00:38:18.466 [2024-06-07 14:40:41.886044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.886055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.886352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.886364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.886674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.886686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.886857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.886868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.887246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.887259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.887608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.887619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 14:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=822654 00:38:18.466 [2024-06-07 14:40:41.887952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.887965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 14:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 822654 00:38:18.466 [2024-06-07 14:40:41.888261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.888273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 
00:38:18.466 14:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:18.466 14:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 822654 ']' 00:38:18.466 [2024-06-07 14:40:41.888536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.888549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 14:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:18.466 [2024-06-07 14:40:41.888861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.888873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.888986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.888999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 14:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 14:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:18.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:18.466 14:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:18.466 [2024-06-07 14:40:41.889502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.889589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 14:40:41 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:18.466 [2024-06-07 14:40:41.890033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.890069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.890420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.890510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 
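The trace above launches a fresh nvmf_tgt (shared-memory id 0, tracepoint mask 0xFFFF, core mask 0xF0) inside the cvl_0_0_ns_spdk namespace and then waits, via waitforlisten, for the new process to come up and accept RPCs on /var/tmp/spdk.sock. A minimal illustration of that wait (purely a sketch, not the autotest helper itself; the 30-second timeout is an assumption):

/*
 * Hypothetical sketch: poll until an SPDK application has created its RPC
 * UNIX-domain socket (default /var/tmp/spdk.sock, as in the log line above)
 * before issuing RPCs to it.  Gives up after roughly 30 seconds.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    const char *sock_path = "/var/tmp/spdk.sock";
    struct stat st;

    for (int i = 0; i < 30; i++) {
        if (stat(sock_path, &st) == 0 && S_ISSOCK(st.st_mode)) {
            printf("%s is ready\n", sock_path);
            return 0;
        }
        sleep(1);   /* socket not created yet; retry */
    }
    fprintf(stderr, "timed out waiting for %s\n", sock_path);
    return 1;
}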
00:38:18.466 [2024-06-07 14:40:41.890813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.890827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.890935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.890946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.891255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.891267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.891466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.891478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.891835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.891846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.892064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.892076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.892391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.892403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.892585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.892597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.892787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.892800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.893123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.893135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 
00:38:18.466 [2024-06-07 14:40:41.893562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.893574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.893878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.893889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.894109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.894120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.894317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.466 [2024-06-07 14:40:41.894329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.466 qpair failed and we were unable to recover it. 00:38:18.466 [2024-06-07 14:40:41.894660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.894671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.894899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.894910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.895259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.895270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.895641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.895651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.895979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.895989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.896360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.896371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 
00:38:18.467 [2024-06-07 14:40:41.896671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.896683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.896850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.896861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.897087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.897099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.897486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.897496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.897706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.897716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.898013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.898024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.898220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.898230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.898500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.898511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.898843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.898853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.899064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.899075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 
00:38:18.467 [2024-06-07 14:40:41.899393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.899406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.899718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.899729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.900052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.900063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.900287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.900299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.900583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.900593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.900869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.900880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.901201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.901212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.901586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.901597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.901900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.901910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.902144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.902154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 
00:38:18.467 [2024-06-07 14:40:41.902511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.902522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.902816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.902828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.903138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.903150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.903453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.903464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.903658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.903667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.904006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.904016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.904350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.904360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.467 qpair failed and we were unable to recover it. 00:38:18.467 [2024-06-07 14:40:41.904677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.467 [2024-06-07 14:40:41.904687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.905025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.905036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.905359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.905369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 
00:38:18.468 [2024-06-07 14:40:41.905697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.905709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.905903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.905914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.906113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.906124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.906366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.906378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.906600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.906610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.906822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.906832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.907062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.907074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.907267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.907278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.907530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.907541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.907847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.907858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 
00:38:18.468 [2024-06-07 14:40:41.908190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.908207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.908314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.908325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.908658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.908669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.908952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.908964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.909193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.909217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.909509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.909520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.909838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.909849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.910039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.910050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.910150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.910158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.910512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.910526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 
00:38:18.468 [2024-06-07 14:40:41.910743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.910754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.910952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.910962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.911271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.911283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.911601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.911612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.911915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.911926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.912023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.912032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.912310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.912321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.912537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.912547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.912893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.912904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.913135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.913145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 
00:38:18.468 [2024-06-07 14:40:41.913463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.913475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.913695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.913707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.913891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.913904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.914183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.914199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.914391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.914402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.914592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.468 [2024-06-07 14:40:41.914603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.468 qpair failed and we were unable to recover it. 00:38:18.468 [2024-06-07 14:40:41.914894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.914905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.915254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.915266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.915472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.915483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.915787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.915797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 
00:38:18.469 [2024-06-07 14:40:41.916177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.916187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.916496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.916508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.916827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.916838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.917154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.917166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.917546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.917558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.917863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.917874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.918015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.918029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.918350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.918361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.918675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.918686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.919021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.919032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 
00:38:18.469 [2024-06-07 14:40:41.919255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.919265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.919673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.919683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.919901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.919911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.920232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.920243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.920640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.920651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.920972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.920982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.921304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.921316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.921597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.921608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.921924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.921935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.922265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.922276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 
00:38:18.469 [2024-06-07 14:40:41.922637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.922648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.922974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.922984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.923192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.923213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.923521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.923532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.923883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.923894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.924089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.924099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.924305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.924316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.924538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.924548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.924882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.924892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.925210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.925221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 
00:38:18.469 [2024-06-07 14:40:41.925532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.925544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.925856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.925866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.926157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.926168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.926343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.469 [2024-06-07 14:40:41.926353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.469 qpair failed and we were unable to recover it. 00:38:18.469 [2024-06-07 14:40:41.926626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.926637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.926854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.926865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.927228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.927239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.927611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.927622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.927922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.927933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.928125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.928136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 
00:38:18.470 [2024-06-07 14:40:41.928414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.928425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.928745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.928755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.928838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.928847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.929156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.929168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.929514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.929524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.929837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.929849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.930158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.930169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.930488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.930500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.930700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.930712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.931034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.931045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 
00:38:18.470 [2024-06-07 14:40:41.931231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.931242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.931565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.931575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.931908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.931918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.932215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.932225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.932420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.932432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.932753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.932763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.932982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.932991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.933362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.933373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.933549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.933559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.933853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.933863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 
00:38:18.470 [2024-06-07 14:40:41.934309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.934320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.934626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.934638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.935035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.935045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.935144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.935154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.935499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.935510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.935825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.935835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.936033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.936044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.936252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.936263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.936443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.936454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.936748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.936759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 
00:38:18.470 [2024-06-07 14:40:41.936951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.936960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.470 [2024-06-07 14:40:41.937140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.470 [2024-06-07 14:40:41.937152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.470 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.937451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.937462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.937736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.937747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.938022] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:38:18.471 [2024-06-07 14:40:41.938075] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:18.471 [2024-06-07 14:40:41.938077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.938089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.938421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.938432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.938749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.938759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.939081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.939091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.939415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.939427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 
00:38:18.471 [2024-06-07 14:40:41.939792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.939803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.940067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.940079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.940378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.940389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.940713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.940725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.941036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.941048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.941309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.941321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.941673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.941685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.941990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.942001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.942358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.942369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.942555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.942566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 
00:38:18.471 [2024-06-07 14:40:41.942880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.942891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.943183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.943200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.943276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.943286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.943574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.943586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.943953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.943964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.944274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.944286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.944464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.944476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.944653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.944665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.944962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.944974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.945188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.945236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 
00:38:18.471 [2024-06-07 14:40:41.945529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.945541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.945666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.945679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.945943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.945954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.471 [2024-06-07 14:40:41.946299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.471 [2024-06-07 14:40:41.946311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.471 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.946429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.946441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.946516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.946527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.946833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.946845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.947181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.947193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.947521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.947532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.947838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.947849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 
00:38:18.472 [2024-06-07 14:40:41.948284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.948296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.948583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.948595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.948855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.948866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.949252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.949264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.949586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.949597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.949926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.949938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.950239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.950251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.950574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.950586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.950941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.950952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.951020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.951031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 
00:38:18.472 [2024-06-07 14:40:41.951279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.951291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.951566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.951578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.951934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.951945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.952281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.952294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.952635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.952647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.952962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.952974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.953273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.953285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.953495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.953507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.953678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.953695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.953954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.953966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 
00:38:18.472 [2024-06-07 14:40:41.954229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.954240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.954417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.954428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.954725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.954736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.955065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.955077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.955266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.955278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.955608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.955620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.955927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.955938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.956316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.472 [2024-06-07 14:40:41.956327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.472 qpair failed and we were unable to recover it. 00:38:18.472 [2024-06-07 14:40:41.956639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.956651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.956953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.956964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 
00:38:18.473 [2024-06-07 14:40:41.957185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.957201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.957410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.957421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.957727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.957738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.958088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.958099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.958487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.958498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.958717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.958728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.958943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.958954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.959149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.959160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.959513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.959525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.959741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.959753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 
00:38:18.473 [2024-06-07 14:40:41.960101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.960112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.960424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.960437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.960757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.960768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.961080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.961090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.961301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.961313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.961548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.961560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.961864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.961875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.962136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.962148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.962460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.962472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.962696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.962707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 
00:38:18.473 [2024-06-07 14:40:41.963022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.963034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.963343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.963354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.963690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.963701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.964079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.964090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.964423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.964435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.964744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.964755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.965092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.965102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.965320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.965330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.965639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.965649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.965869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.965879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 
00:38:18.473 [2024-06-07 14:40:41.966251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.966261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.966605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.966615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.966923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.966934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.967269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.967280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.967630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.967640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.473 [2024-06-07 14:40:41.967979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.473 [2024-06-07 14:40:41.967990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.473 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.968262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.968272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.968582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.968593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.968970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.968981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.969193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.969208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 
00:38:18.474 [2024-06-07 14:40:41.969568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.969580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.969886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.969897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.970260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.970270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.970633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.970644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.970988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.970999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.971292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.971303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.971651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.971661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.971971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.971982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.972205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.972216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.972440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.972450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 
00:38:18.474 [2024-06-07 14:40:41.972765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.972775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.973091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.973101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.973422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.973433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.973748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.973759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.974058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.974069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.974291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.974301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 EAL: No free 2048 kB hugepages reported on node 1 00:38:18.474 [2024-06-07 14:40:41.974631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.974642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.974953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.974964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.975069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.975079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.975405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.975416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 
00:38:18.474 [2024-06-07 14:40:41.975618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.975627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.975933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.975944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.976272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.976282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.976620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.976631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.976951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.976963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.977312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.977323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.977550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.977560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.977777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.977787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.978084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.978096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.978438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.978449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 
00:38:18.474 [2024-06-07 14:40:41.978791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.474 [2024-06-07 14:40:41.978802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.474 qpair failed and we were unable to recover it. 00:38:18.474 [2024-06-07 14:40:41.979056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.979066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.979155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.979165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.979687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.979777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2184000b90 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.980069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.980105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2184000b90 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.980635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.980725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2184000b90 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.980986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.980998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.981371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.981383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.981694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.981705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.981928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.981938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 
00:38:18.475 [2024-06-07 14:40:41.982229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.982240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.982606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.982616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.982913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.982924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.983132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.983143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.983331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.983341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.983669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.983680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.983918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.983928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.984280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.984292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.984617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.984629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.984974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.984986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 
00:38:18.475 [2024-06-07 14:40:41.985204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.985216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.985550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.985561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.985888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.985900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.986239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.986250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.986473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.986483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.986733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.986743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.987018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.987029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.987238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.987248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.987569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.987580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.987800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.987810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 
00:38:18.475 [2024-06-07 14:40:41.988030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.988041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.988235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.988246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.988449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.988460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.988777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.988789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.989127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.989137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.989454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.989465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.989817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.989828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.990026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.990037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.475 [2024-06-07 14:40:41.990444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.475 [2024-06-07 14:40:41.990456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.475 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.990763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.990773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 
00:38:18.476 [2024-06-07 14:40:41.991076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.991089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.991302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.991313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.991647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.991658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.991937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.991947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.992261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.992273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.992649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.992662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.992975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.992985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.993271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.993283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.993611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.993620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.993827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.993837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 
00:38:18.476 [2024-06-07 14:40:41.994190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.994204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.994429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.994438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.994742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.994752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.995086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.995095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.995323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.995333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.995672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.995681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.995982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.995993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.996302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.996313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.996494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.996505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.996823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.996832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 
00:38:18.476 [2024-06-07 14:40:41.997168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.997178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.997441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.997451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.997789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.997799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.998092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.998102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.998428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.998438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.998665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.476 [2024-06-07 14:40:41.998674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.476 qpair failed and we were unable to recover it. 00:38:18.476 [2024-06-07 14:40:41.998847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:41.998857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:41.999185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:41.999202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:41.999503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:41.999513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:41.999775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:41.999785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 
00:38:18.477 [2024-06-07 14:40:42.000082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.000092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.000409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.000418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.000751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.000760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.000945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.000955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.001269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.001279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.001461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.001471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.001808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.001817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.002132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.002141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.002328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.002338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.002665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.002675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 
00:38:18.477 [2024-06-07 14:40:42.003016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.003026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.003373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.003383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.003739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.003750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.004082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.004091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.004437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.004446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.004774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.004783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.005007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.005016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.005266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.005276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.005586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.005595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.005787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.005799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 
00:38:18.477 [2024-06-07 14:40:42.006118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.006128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.006495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.006506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.006843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.006853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.007182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.007191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.007416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.007428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.007761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.007771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.008093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.008104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.008428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.008438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.008780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.008788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.008987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.008996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 
00:38:18.477 [2024-06-07 14:40:42.009309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.009319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.009622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.009631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.009964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.009974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.010312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.477 [2024-06-07 14:40:42.010322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.477 qpair failed and we were unable to recover it. 00:38:18.477 [2024-06-07 14:40:42.010649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.010657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.011000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.011009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.011388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.011397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.011742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.011751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.012091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.012100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.012317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.012326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 
00:38:18.478 [2024-06-07 14:40:42.012659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.012669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.013006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.013015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.013335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.013345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.013571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.013580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.013899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.013908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.014252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.014262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.014500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.014510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.014809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.014818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.015159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.015169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.015363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.015373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 
00:38:18.478 [2024-06-07 14:40:42.015649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.015658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.015991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.016000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.016351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.016360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.016682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.016691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.016897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.016906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.017209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.017220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.017550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.017560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.017853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.017863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.018172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.018182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.018526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.018537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 
00:38:18.478 [2024-06-07 14:40:42.018879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.018890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.019228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.019238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.019545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.019554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.019886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.019895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.020231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.020241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.020433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.020447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.020880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.020889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.021202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.021212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.021449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.478 [2024-06-07 14:40:42.021459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.478 qpair failed and we were unable to recover it. 00:38:18.478 [2024-06-07 14:40:42.021789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.021799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 
00:38:18.479 [2024-06-07 14:40:42.022112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.022122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.022329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.022339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.022788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.022797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.023109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.023118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.023436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.023446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.023786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.023796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.024134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.024144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.024316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.024325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.024649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.024659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.025026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.025035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 
00:38:18.479 [2024-06-07 14:40:42.025218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.025228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.025550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.025559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.025743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.025752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.026127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.026137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.026488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.026498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.026824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.026834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.027177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.027187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.027565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.027575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.027906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.027916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.028253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.028263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 
00:38:18.479 [2024-06-07 14:40:42.028557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.028566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.028876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.028885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.029234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.029245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.029565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.029574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.029744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.029752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.029973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.029982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.030307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.030316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.030673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.030683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.030874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.030883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.031190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.031204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 
00:38:18.479 [2024-06-07 14:40:42.031505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.031515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.031927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.031935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.032232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.032241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.032548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.032557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.032849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.032858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.033149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.479 [2024-06-07 14:40:42.033159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.479 qpair failed and we were unable to recover it. 00:38:18.479 [2024-06-07 14:40:42.033386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:18.479 [2024-06-07 14:40:42.033539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.033548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.033819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.033828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.034152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.034161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.034549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.034559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 
00:38:18.480 [2024-06-07 14:40:42.034878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.034888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.035244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.035253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.035428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.035437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.035732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.035741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.036088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.036098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.036480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.036491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.036850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.036861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.037144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.037153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.037514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.037524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.037838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.037848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 
00:38:18.480 [2024-06-07 14:40:42.038164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.038173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.038493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.038504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.038732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.038742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.039063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.039072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.039417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.039426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.039774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.039783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.039964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.039974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.040287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.040297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.040608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.040617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.040964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.040973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 
00:38:18.480 [2024-06-07 14:40:42.041323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.041333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.041663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.041672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.041855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.041865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.042062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.042072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.042374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.042384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.042747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.042756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.043099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.043108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.043476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.043485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.043794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.043803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.044156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.044166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 
00:38:18.480 [2024-06-07 14:40:42.044370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.044380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.044738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.044748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.045045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.045055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.045403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.045413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.045761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.480 [2024-06-07 14:40:42.045771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.480 qpair failed and we were unable to recover it. 00:38:18.480 [2024-06-07 14:40:42.046126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.046136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.046398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.046411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.046720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.046730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.047059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.047068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.047260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.047270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 
00:38:18.481 [2024-06-07 14:40:42.047618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.047628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.047949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.047959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.048298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.048310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.048627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.048637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.048979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.048989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.049305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.049316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.049631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.049641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.049973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.049983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.050180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.050190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.050413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.050423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 
00:38:18.481 [2024-06-07 14:40:42.050787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.050799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.050984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.050995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.051281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.051293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.051525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.051536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.051855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.051865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.052203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.052215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.052526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.052537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.052845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.052857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.053181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.053191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.053496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.053506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 
00:38:18.481 [2024-06-07 14:40:42.053829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.053839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.054131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.054141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.054483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.054493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.054830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.054843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.055179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.055189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.055423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.055433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.055751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.055760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.481 qpair failed and we were unable to recover it. 00:38:18.481 [2024-06-07 14:40:42.056150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.481 [2024-06-07 14:40:42.056159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.056619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.056629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.056906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.056916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 
00:38:18.482 [2024-06-07 14:40:42.057281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.057292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.057619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.057629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.057927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.057936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.058260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.058270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.058509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.058519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.058836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.058845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.059031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.059040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.059327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.059337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.059640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.059649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.059970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.059980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 
00:38:18.482 [2024-06-07 14:40:42.060165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.060175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.060514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.060523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.060686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.060696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.061049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.061059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.061300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.061309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.061626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.061635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.062031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.062041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.062322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.062332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.062657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.062667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.062848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.062856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 
00:38:18.482 [2024-06-07 14:40:42.063200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.063212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.063532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.063542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.063882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.063892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.064207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.064218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.064517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.064527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.064820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.064830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.065167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.065177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.065480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.065470] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:18.482 [2024-06-07 14:40:42.065489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.065502] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:18.482 [2024-06-07 14:40:42.065509] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:18.482 [2024-06-07 14:40:42.065516] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:18.482 [2024-06-07 14:40:42.065521] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:18.482 [2024-06-07 14:40:42.065724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.065733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.065703] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:38:18.482 [2024-06-07 14:40:42.065854] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:38:18.482 [2024-06-07 14:40:42.065977] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:38:18.482 [2024-06-07 14:40:42.066046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.066056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.065979] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:38:18.482 [2024-06-07 14:40:42.066277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.482 [2024-06-07 14:40:42.066287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.482 qpair failed and we were unable to recover it. 00:38:18.482 [2024-06-07 14:40:42.066623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.066633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.066834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.066843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.067010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.067020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.067353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.067363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.067632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.067643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.067831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.067843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 
00:38:18.483 [2024-06-07 14:40:42.068053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.068062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.068296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.068306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.068645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.068653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.068864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.068874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.069151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.069160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.069358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.069368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.069591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.069600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.070027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.070039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.070225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.070235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.070521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.070530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 
00:38:18.483 [2024-06-07 14:40:42.070835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.070844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.071178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.071188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.071548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.071559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.071901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.071911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.072132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.072142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.072493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.072503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.072840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.072850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.073060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.073070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.073403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.073413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.073754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.073763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 
00:38:18.483 [2024-06-07 14:40:42.074073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.074082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.074420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.074430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.074614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.074623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.074800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.074810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.075101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.075111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.075283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.075293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.075665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.075676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.076010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.076019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.076399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.076409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.076603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.076612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 
00:38:18.483 [2024-06-07 14:40:42.076806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.076815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.076987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.076996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.483 [2024-06-07 14:40:42.077332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.483 [2024-06-07 14:40:42.077341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.483 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.077560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.077569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.077874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.077886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.078212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.078223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.078392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.078401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.078825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.078835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.079206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.079216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.079429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.079438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 
00:38:18.484 [2024-06-07 14:40:42.079741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.079750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.080095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.080105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.080430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.080439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.080617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.080627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.080836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.080845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.081047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.081056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.081375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.081385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.081683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.081692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.082003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.082013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.082204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.082214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 
00:38:18.484 [2024-06-07 14:40:42.082410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.082420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.082632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.082642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.082976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.082986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.083278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.083288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.083626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.083636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.083957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.083967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.084038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.084047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.084385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.084395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.084603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.084612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.084942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.084952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 
00:38:18.484 [2024-06-07 14:40:42.085137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.085147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.085351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.085360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.085576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.085586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.085909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.085918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.086252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.086262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.086624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.086635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.086975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.086985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.087228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.087239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.087530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.087540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.484 [2024-06-07 14:40:42.087837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.087847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 
00:38:18.484 [2024-06-07 14:40:42.088234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.484 [2024-06-07 14:40:42.088244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.484 qpair failed and we were unable to recover it. 00:38:18.485 [2024-06-07 14:40:42.088552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.485 [2024-06-07 14:40:42.088562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.485 qpair failed and we were unable to recover it. 00:38:18.485 [2024-06-07 14:40:42.088902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.485 [2024-06-07 14:40:42.088912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.485 qpair failed and we were unable to recover it. 00:38:18.485 [2024-06-07 14:40:42.089277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.485 [2024-06-07 14:40:42.089287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.485 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.089459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.089469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.089682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.089697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.089875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.089885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.090176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.090185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.090521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.090531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.090918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.090927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 
00:38:18.766 [2024-06-07 14:40:42.091108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.091119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.091467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.091477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.091796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.091806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.092095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.092105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.092319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.092329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.092652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.092661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.092848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.092857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.093187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.093201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.093505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.093514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.093723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.093733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 
00:38:18.766 [2024-06-07 14:40:42.094021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.094030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.094369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.094379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.094581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.094590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.094877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.094886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.095179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.095189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.095624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.095633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.095820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.095829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.096118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.096127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.096302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.096313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.096720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.096730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 
00:38:18.766 [2024-06-07 14:40:42.097068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.097078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.097271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.097281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.097660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.097672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.098019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.098028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.098375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.098384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.098736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.098746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.098930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.098940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.766 [2024-06-07 14:40:42.099136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.766 [2024-06-07 14:40:42.099146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.766 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.099438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.099448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.099621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.099631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 
00:38:18.767 [2024-06-07 14:40:42.099846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.099856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.100181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.100191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.100507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.100517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.100835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.100844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.101143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.101153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.101502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.101512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.101719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.101728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.102075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.102085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.102284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.102294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.102624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.102634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 
00:38:18.767 [2024-06-07 14:40:42.102976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.102986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.103293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.103304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.103511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.103520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.103854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.103863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.104201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.104211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.104531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.104540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.104902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.104912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.105095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.105105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.105481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.105491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.105554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.105564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 
00:38:18.767 [2024-06-07 14:40:42.105840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.105850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.106171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.106180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.106537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.106547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.106890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.106901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.107246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.107256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.107457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.107467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.107518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.107526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.107842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.107852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.108202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.108212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.108539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.108548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 
00:38:18.767 [2024-06-07 14:40:42.108718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.108727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.108974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.108984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.109268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.109278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.109473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.109482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.109855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.767 [2024-06-07 14:40:42.109865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.767 qpair failed and we were unable to recover it. 00:38:18.767 [2024-06-07 14:40:42.110207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.110217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.110417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.110426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.110776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.110786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.110984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.110993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.111316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.111326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 
00:38:18.768 [2024-06-07 14:40:42.111664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.111674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.111863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.111873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.112217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.112227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.112541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.112550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.112895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.112904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.113204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.113214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.113484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.113493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.113823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.113833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.114190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.114206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.114402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.114413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 
00:38:18.768 [2024-06-07 14:40:42.114767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.114777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.114961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.114970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.115292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.115302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.115512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.115522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.115744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.115754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.116063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.116073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.116262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.116273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.116452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.116462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.116798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.116807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.117094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.117103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 
00:38:18.768 [2024-06-07 14:40:42.117423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.117434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.117818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.117827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.118120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.118130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.118320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.118331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.118704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.118714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.119104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.119114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.119495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.119505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.119875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.119885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.120185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.120207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.120548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.120558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 
00:38:18.768 [2024-06-07 14:40:42.120884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.120895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.768 [2024-06-07 14:40:42.121238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.768 [2024-06-07 14:40:42.121248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.768 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.121507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.121516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.121715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.121724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.122123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.122133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.122456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.122466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.122813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.122822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.123023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.123032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.123365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.123374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.123713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.123723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 
00:38:18.769 [2024-06-07 14:40:42.124021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.124030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.124207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.124216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.124529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.124538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.124839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.124848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.125171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.125181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.125589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.125598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.125922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.125932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.126277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.126289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.126672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.126682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.127022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.127032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 
00:38:18.769 [2024-06-07 14:40:42.127216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.127227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.127519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.127529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.127747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.127756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.128080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.128089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.128393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.128403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.128704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.128713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.129052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.129062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.129324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.129333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.129595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.129604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.129820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.129829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 
00:38:18.769 [2024-06-07 14:40:42.129958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.129967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.130174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.130183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.130481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.130490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.130669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.130679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.131074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.131084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.131416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.131426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.131596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.131605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.769 qpair failed and we were unable to recover it. 00:38:18.769 [2024-06-07 14:40:42.131929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.769 [2024-06-07 14:40:42.131938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.132274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.132284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.132589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.132598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 
00:38:18.770 [2024-06-07 14:40:42.132803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.132812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.132991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.132999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.133303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.133313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.133673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.133682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.133877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.133888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.134192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.134206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.134394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.134403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.134681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.134691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.135003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.135012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.135192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.135213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 
00:38:18.770 [2024-06-07 14:40:42.135509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.135517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.135759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.135768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.136117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.136127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.136326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.136336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.136707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.136716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.136900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.136909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.137105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.137115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.137451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.137462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.137837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.137847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.138045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.138054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 
00:38:18.770 [2024-06-07 14:40:42.138375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.138385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.138712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.138721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.139056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.139065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.139399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.139409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.139727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.770 [2024-06-07 14:40:42.139736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.770 qpair failed and we were unable to recover it. 00:38:18.770 [2024-06-07 14:40:42.140088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.140097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.140291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.140300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.140622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.140632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.140679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.140687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.140969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.140977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 
00:38:18.771 [2024-06-07 14:40:42.141152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.141163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.141461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.141471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.141801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.141810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.141995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.142004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.142233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.142244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.142555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.142564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.142912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.142921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.143246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.143255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.143430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.143439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.143618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.143627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 
00:38:18.771 [2024-06-07 14:40:42.143797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.143805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.144107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.144117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.144319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.144328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.144653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.144662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.144876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.771 [2024-06-07 14:40:42.144885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.771 qpair failed and we were unable to recover it. 00:38:18.771 [2024-06-07 14:40:42.145224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.145235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.145554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.145563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.145806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.145816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.146162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.146171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.146431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.146440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 
00:38:18.772 [2024-06-07 14:40:42.146626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.146636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.147008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.147017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.147213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.147223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.147446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.147455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.147768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.147777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.147873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.147881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.148055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.148064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.148256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.148266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.148444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.148452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.148644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.148654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 
00:38:18.772 [2024-06-07 14:40:42.148989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.148998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.149334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.149344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.149554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.149563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.149768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.149777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.149999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.150008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.150100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.150109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.150322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.150331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.150648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.150658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.150856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.150866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.151081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.151090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 
00:38:18.772 [2024-06-07 14:40:42.151261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.151270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.151573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.151582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.772 [2024-06-07 14:40:42.151768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.772 [2024-06-07 14:40:42.151779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.772 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.151987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.151996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.152184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.152201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.152506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.152516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.152684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.152693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.152892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.152900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.153080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.153090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.153264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.153274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 
00:38:18.773 [2024-06-07 14:40:42.153466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.153475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.153771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.153780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.154101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.154110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.154170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.154178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.154450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.154459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.154769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.154778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.154971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.154981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.155307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.155317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.155670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.155681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.156066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.156075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 
00:38:18.773 [2024-06-07 14:40:42.156262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.156271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.156648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.156657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.157015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.157025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.157222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.157232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.157427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.157436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.157728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.157737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.157914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.157923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.158214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.158224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.158536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.158545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 00:38:18.773 [2024-06-07 14:40:42.158720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.773 [2024-06-07 14:40:42.158731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.773 qpair failed and we were unable to recover it. 
00:38:18.774 [2024-06-07 14:40:42.159060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.159070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.159370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.159380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.159484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.159493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.159708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.159718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.159879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.159888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.160086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.160095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.160400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.160410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.160624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.160634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.160850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.160859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.161200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.161210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 
00:38:18.774 [2024-06-07 14:40:42.161379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.161388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.161721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.161730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.161915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.161924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.162253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.162263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.162609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.162618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.162929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.162938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.163136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.163145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.163533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.163544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.163718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.163727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.164059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.164068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 
00:38:18.774 [2024-06-07 14:40:42.164383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.164393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.164592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.164602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.164941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.164951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.165141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.165149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.165509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.165519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.165817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.165827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.166022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.166033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.166372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.166382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.166563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.166572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 00:38:18.774 [2024-06-07 14:40:42.166873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.774 [2024-06-07 14:40:42.166882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.774 qpair failed and we were unable to recover it. 
00:38:18.774 [2024-06-07 14:40:42.167059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.167068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.167277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.167287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.167470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.167479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.167661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.167670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.168018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.168027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.168349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.168359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.168406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.168415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.168739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.168747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.169087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.169096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.169277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.169288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 
00:38:18.775 [2024-06-07 14:40:42.169594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.169603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.169952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.169961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.170270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.170280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.170593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.170602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.170815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.170824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.171124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.171133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.171309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.171318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.171677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.171686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.172036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.172045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.172378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.172388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 
00:38:18.775 [2024-06-07 14:40:42.172626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.172636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.172951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.172961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.173309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.173319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.173732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.173741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.174034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.174045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.174356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.174366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.174731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.174740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.175050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.175059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.175256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.175265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.175586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.175595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 
00:38:18.775 [2024-06-07 14:40:42.175919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.175928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.176121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.176130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.176333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.176342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.176567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.176576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.176760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.176768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.177083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.775 [2024-06-07 14:40:42.177092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.775 qpair failed and we were unable to recover it. 00:38:18.775 [2024-06-07 14:40:42.177296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.177305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.177693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.177703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.178054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.178064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.178396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.178406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 
00:38:18.776 [2024-06-07 14:40:42.178752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.178761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.179093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.179102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.179485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.179494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.179664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.179673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.179879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.179888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.180192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.180205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.180420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.180429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.180599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.180608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.180818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.180827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.181132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.181141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 
00:38:18.776 [2024-06-07 14:40:42.181458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.181467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.181663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.181673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.182026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.182035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.182295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.182304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.182626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.182636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.182837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.182846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.183186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.183199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.183388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.183397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.183589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.183598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.183904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.183912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 
00:38:18.776 [2024-06-07 14:40:42.184096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.184105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.184504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.184514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.184852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.184861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.185055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.185065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.185282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.185294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.185594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.776 [2024-06-07 14:40:42.185604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.776 qpair failed and we were unable to recover it. 00:38:18.776 [2024-06-07 14:40:42.185799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.185808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.186139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.186148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.186361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.186370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.186577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.186587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 
00:38:18.778 [2024-06-07 14:40:42.186793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.186803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.187126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.187136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.187510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.187520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.187713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.187723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.187904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.187913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.188250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.188260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.188450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.188460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.188647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.188656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.189022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.189031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.189185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.189198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 
00:38:18.778 [2024-06-07 14:40:42.189387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.189397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.189689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.189699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.190060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.190069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.190388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.190399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.190450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.190459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.190639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.190650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.191027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.191037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.191427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.191438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.191758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.191768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.192072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.192081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 
00:38:18.778 [2024-06-07 14:40:42.192381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.192390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.192695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.192706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.192877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.192886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.193055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.193064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.193384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.193393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.778 [2024-06-07 14:40:42.193612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.778 [2024-06-07 14:40:42.193621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.778 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.193977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.193987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.194200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.194210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.194390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.194399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.194718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.194727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 
00:38:18.779 [2024-06-07 14:40:42.195070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.195080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.195263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.195273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.195318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.195326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.195646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.195655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.195972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.195982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.196328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.196339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.196659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.196668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.196871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.196880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.196921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.196930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.197323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.197332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 
00:38:18.779 [2024-06-07 14:40:42.197661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.197671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.197843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.197852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.198249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.198259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.198352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.198361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.198740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.198749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.199060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.199070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.199150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.199158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.199453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.199462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.199638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.199648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.199817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.199827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 
00:38:18.779 [2024-06-07 14:40:42.200038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.200048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.200269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.200281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.200627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.200638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.200932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.200943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.201021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.201030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.201304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.201317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.201692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.201703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.201749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.201759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.779 [2024-06-07 14:40:42.201931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.779 [2024-06-07 14:40:42.201942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.779 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.202279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.202291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 
00:38:18.780 [2024-06-07 14:40:42.202637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.202648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.202836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.202847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.202985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.202997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.203237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.203249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.203543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.203553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.203896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.203907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.204213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.204224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.204590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.204601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.204792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.204802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.205105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.205115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 
00:38:18.780 [2024-06-07 14:40:42.205318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.205328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.205662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.205672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.206023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.206034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.206352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.206363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.206709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.206719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.206859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.206869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.207209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.207220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.207539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.207550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.207747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.207757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.208049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.208058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 
00:38:18.780 [2024-06-07 14:40:42.208366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.208376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.208556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.208566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.208875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.208886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.209073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.209083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.209282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.209292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.209652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.209662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.209999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.210010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.210327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.210337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.210496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.210506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.210686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.210699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 
00:38:18.780 [2024-06-07 14:40:42.211030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.780 [2024-06-07 14:40:42.211040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.780 qpair failed and we were unable to recover it. 00:38:18.780 [2024-06-07 14:40:42.211441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.211452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.211758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.211769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.211996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.212007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.212314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.212324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.212642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.212652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.212984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.212995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.213329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.213340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.213532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.213542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.213584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.213594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 
00:38:18.781 [2024-06-07 14:40:42.213724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.213735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.214073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.214083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.214283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.214293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.214618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.214628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.214940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.214951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.215267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.215278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.215599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.215610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.215656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.215664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.215832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.215842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.216184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.216198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 
00:38:18.781 [2024-06-07 14:40:42.216506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.216517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.216812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.216822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.217114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.217124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.217497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.217508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.217734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.217744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.218077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.218088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.218277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.218289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.218629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.218640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.218952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.218963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.219148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.219158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 
00:38:18.781 [2024-06-07 14:40:42.219350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.219361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.219690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.219702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.219887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.219898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.220072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.220083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.220396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.220406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.220765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.220775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.220956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.220966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.781 [2024-06-07 14:40:42.221355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.781 [2024-06-07 14:40:42.221366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.781 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.221710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.221721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.222082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.222093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 
00:38:18.782 [2024-06-07 14:40:42.222408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.222419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.222729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.222740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.223085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.223096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.223296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.223307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.223627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.223637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.223924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.223935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.224227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.224238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.224451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.224461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.224795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.224805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.225179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.225190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 
00:38:18.782 [2024-06-07 14:40:42.225567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.225577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.225901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.225912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.226234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.226245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.226587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.226600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.226788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.226799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.227102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.227112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.227445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.227456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.227635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.227645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.227957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.227969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.228269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.228280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 
00:38:18.782 [2024-06-07 14:40:42.228452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.228461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.228807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.228817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.229157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.229167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.229479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.229490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.229824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.229834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.230151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.230162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.230348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.230358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.230692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.230703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.231056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.231067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.231390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.231402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 
00:38:18.782 [2024-06-07 14:40:42.231603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.782 [2024-06-07 14:40:42.231613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.782 qpair failed and we were unable to recover it. 00:38:18.782 [2024-06-07 14:40:42.231949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.231961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.232279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.232290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.232603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.232613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.232894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.232905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.233217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.233229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.233569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.233579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.233765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.233775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.234071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.234081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.234315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.234325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 
00:38:18.783 [2024-06-07 14:40:42.234690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.234701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.235017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.235029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.235370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.235380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.235562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.235572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.235961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.235972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.236287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.236299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.236587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.236597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.236912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.236923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.237237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.237247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.237415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.237425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 
00:38:18.783 [2024-06-07 14:40:42.237750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.237760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.237958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.237967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.238282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.238293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.238607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.238618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.238964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.238975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.239156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.239166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.239468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.239479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.239812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.239823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.240167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.240178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 00:38:18.783 [2024-06-07 14:40:42.240496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.783 [2024-06-07 14:40:42.240507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.783 qpair failed and we were unable to recover it. 
00:38:18.784 [2024-06-07 14:40:42.240558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.240568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.240908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.240919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.241146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.241157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.241471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.241482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.241813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.241824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.242156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.242168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.242472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.242483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.242818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.242829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.243147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.243158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.243339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.243351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 
00:38:18.784 [2024-06-07 14:40:42.243628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.243639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.243827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.243838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.244223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.244235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.244573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.244583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.244749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.244759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.244939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.244949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.245116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.245126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.245317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.245327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.245504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.245515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.245740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.245751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 
00:38:18.784 [2024-06-07 14:40:42.245929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.245939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.246111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.246124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.246416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.246427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.246725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.246736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.247049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.247060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.247242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.247253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.247571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.247581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.247920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.247930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.248245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.248255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.248553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.248564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 
00:38:18.784 [2024-06-07 14:40:42.248849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.248861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.249024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.249034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.249368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.249378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.249672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.249683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.249998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.250009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.250189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.784 [2024-06-07 14:40:42.250205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.784 qpair failed and we were unable to recover it. 00:38:18.784 [2024-06-07 14:40:42.250552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.250562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.250880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.250891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.251207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.251218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.251564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.251574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 
00:38:18.785 [2024-06-07 14:40:42.251890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.251899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.252207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.252219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.252556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.252566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.252911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.252923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.253254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.253264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.253585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.253596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.253768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.253779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.254115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.254125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.254438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.254453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.254790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.254801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 
00:38:18.785 [2024-06-07 14:40:42.255112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.255123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.255346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.255356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.255653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.255673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.255984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.255994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.256207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.256217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.256264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.256273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.256569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.256579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.256917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.256928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.257267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.257277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.257469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.257479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 
00:38:18.785 [2024-06-07 14:40:42.257805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.257815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.258079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.258089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.258420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.258431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.258559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.258568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.258760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.258770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.259114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.259124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.259433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.259443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.259631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.259642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.259805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.259815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.260131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.260141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 
00:38:18.785 [2024-06-07 14:40:42.260520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.260530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.260861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.260872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.785 qpair failed and we were unable to recover it. 00:38:18.785 [2024-06-07 14:40:42.261184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.785 [2024-06-07 14:40:42.261198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.261538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.261549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.261779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.261790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.262058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.262068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.262250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.262260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.262448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.262458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.262797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.262807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.263124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.263134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 
00:38:18.786 [2024-06-07 14:40:42.263473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.263483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.263791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.263803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.263995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.264005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.264332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.264342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.264667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.264677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.264859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.264868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.265213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.265223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.265574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.265584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.265886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.265897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.266242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.266253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 
00:38:18.786 [2024-06-07 14:40:42.266304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.266313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.266578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.266589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.266928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.266938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.267107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.267117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.267440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.267451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.267783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.267794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.267980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.267991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.268308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.268319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.268668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.268679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.269003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.269013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 
00:38:18.786 [2024-06-07 14:40:42.269329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.269340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.269508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.269518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.269854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.269864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.270202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.270213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.270390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.270400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.270708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.270718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.271044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.271055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.271413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.271423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.271749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.271760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.786 qpair failed and we were unable to recover it. 00:38:18.786 [2024-06-07 14:40:42.272102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.786 [2024-06-07 14:40:42.272113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.787 qpair failed and we were unable to recover it. 
00:38:18.787 [2024-06-07 14:40:42.272435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:18.787 [2024-06-07 14:40:42.272445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420
00:38:18.787 qpair failed and we were unable to recover it.
00:38:18.793 [... the same three-line error sequence repeats for every reconnect attempt from 2024-06-07 14:40:42.272435 through 14:40:42.333837: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x108c730 with addr=10.0.0.2, port=4420, and each qpair fails without recovering ...]
00:38:18.793 [2024-06-07 14:40:42.334122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.334132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.334465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.334475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.334652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.334662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.334992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.335003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.335353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.335363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.335704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.335715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.336083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.336093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.336267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.336278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.336390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.336401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.336947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.337038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 
00:38:18.793 [2024-06-07 14:40:42.337532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.337620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.338031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.338066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.338431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.338463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.338812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.338824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.338872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.338880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.339174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.339184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.339507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.339518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.339704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.339715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.339890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.339902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.340022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.340032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 
00:38:18.793 [2024-06-07 14:40:42.340366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.340377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.340662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.340673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.341011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.341021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.341207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.341219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.793 qpair failed and we were unable to recover it. 00:38:18.793 [2024-06-07 14:40:42.341593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.793 [2024-06-07 14:40:42.341604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.341908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.341919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.341959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.341969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.342335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.342347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.342671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.342681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.342848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.342859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 
00:38:18.794 [2024-06-07 14:40:42.343023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.343032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.343224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.343234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.343564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.343574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.343761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.343771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.344102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.344112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.344436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.344447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.344793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.344804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.345116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.345127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.345342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.345353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.345683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.345693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 
00:38:18.794 [2024-06-07 14:40:42.346034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.346047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.346372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.346384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.346556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.346567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.346915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.346926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.347141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.347151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.347309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.347319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.347668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.347678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.347900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.347910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.348106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.348117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.348412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.348423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 
00:38:18.794 [2024-06-07 14:40:42.348749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.348760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.349136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.794 [2024-06-07 14:40:42.349147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.794 qpair failed and we were unable to recover it. 00:38:18.794 [2024-06-07 14:40:42.349472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.349483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.349825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.349837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.350156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.350167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.350441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.350452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.350782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.350793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.350970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.350981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.351334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.351345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.351694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.351705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 
00:38:18.795 [2024-06-07 14:40:42.352036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.352047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.352360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.352370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.352713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.352724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.352902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.352912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.353085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.353097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.353329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.353340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.353676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.353687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.353821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.353835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.354071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.354082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.354412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.354424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 
00:38:18.795 [2024-06-07 14:40:42.354737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.354748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.355064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.355075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.355262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.355274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.355642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.355653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.355971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.355983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.356280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.356292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.356473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.356485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.356686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.356698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.356892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.356903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 00:38:18.795 [2024-06-07 14:40:42.357192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.795 [2024-06-07 14:40:42.357208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.795 qpair failed and we were unable to recover it. 
00:38:18.795 [2024-06-07 14:40:42.357547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.357559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.357891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.357903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.358091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.358102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.358462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.358474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.358815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.358826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.359145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.359156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.359488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.359499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.359832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.359843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.360190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.360206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.360517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.360529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 
00:38:18.796 [2024-06-07 14:40:42.360844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.360855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.361190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.361209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.361539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.361550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.361928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.361939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.362249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.362261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.362597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.362609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.362948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.362960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.363267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.363279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.363594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.363605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.363935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.363947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 
00:38:18.796 [2024-06-07 14:40:42.364286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.364297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.364612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.364623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.364932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.364944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.365272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.365284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.365664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.365675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.365988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.366000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.366297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.366307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.796 qpair failed and we were unable to recover it. 00:38:18.796 [2024-06-07 14:40:42.366630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.796 [2024-06-07 14:40:42.366640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.366941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.366953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.367136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.367146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 
00:38:18.797 [2024-06-07 14:40:42.367465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.367475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.367789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.367799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.368140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.368150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.368457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.368469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.368782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.368792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.368970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.368981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.369311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.369322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.369637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.369647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.369961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.369971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.370283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.370294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 
00:38:18.797 [2024-06-07 14:40:42.370423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.370435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.370790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.370800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.370987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.370998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.371209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.371220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.371601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.371612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.371931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.371942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.372258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.372268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.372432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.372442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.372782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.372793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 00:38:18.797 [2024-06-07 14:40:42.373132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.373142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 
00:38:18.797 [2024-06-07 14:40:42.373455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.797 [2024-06-07 14:40:42.373466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:18.797 qpair failed and we were unable to recover it. 
[... the same pair of errors (posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 14:40:42.373455 through 14:40:42.431021; the repeated occurrences are omitted here ...]
00:38:19.082 [2024-06-07 14:40:42.431011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.082 [2024-06-07 14:40:42.431021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:19.082 qpair failed and we were unable to recover it. 
00:38:19.082 [2024-06-07 14:40:42.431362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.082 [2024-06-07 14:40:42.431373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:19.082 qpair failed and we were unable to recover it. 00:38:19.082 [2024-06-07 14:40:42.431598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.082 [2024-06-07 14:40:42.431608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:19.082 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.431942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.431952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.432260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.432270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.432362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.432372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108c730 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Write completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Write completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Write completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Write completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Write completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error 
(sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Write completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Write completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Write completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Write completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Write completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Write completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Write completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Read completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 Write completed with error (sct=0, sc=8) 00:38:19.083 starting I/O failed 00:38:19.083 [2024-06-07 14:40:42.432578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.083 [2024-06-07 14:40:42.432893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.432909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.433189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.433212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.433669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.433698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.433883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.433892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.434069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.434076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.434510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.434540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.434883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.434892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 
00:38:19.083 [2024-06-07 14:40:42.435215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.435231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.435566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.435575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.435907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.435916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.436234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.436242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.436578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.436586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.436918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.436926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.437326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.437334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.437531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.437539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.437760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.437768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 00:38:19.083 [2024-06-07 14:40:42.438013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.083 [2024-06-07 14:40:42.438021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.083 qpair failed and we were unable to recover it. 
00:38:19.083 [2024-06-07 14:40:42.438367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.438375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.438677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.438685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.439008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.439017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.439374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.439382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.439726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.439735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.440059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.440067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.440377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.440385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.440571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.440579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.440818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.440825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.441142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.441150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 
00:38:19.084 [2024-06-07 14:40:42.441338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.441346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.441529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.441537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.442084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.442175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2184000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.442730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.442819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2184000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.443116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.443150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2184000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.443684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.443772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2184000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.444212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.444230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.444471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.444480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.444805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.444813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.445132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.445140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 
00:38:19.084 [2024-06-07 14:40:42.445231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.445238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.445390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.445397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.445586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.445593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.445920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.445930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.446253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.446262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.446555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.446563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.446617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.446623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.446927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.446935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.447262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.447271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.447430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.447437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 
00:38:19.084 [2024-06-07 14:40:42.447775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.447782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.447955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.447963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.448267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.448274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.448694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.448702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.448882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.448890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.449165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.449173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.449506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.449514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.449558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.084 [2024-06-07 14:40:42.449565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.084 qpair failed and we were unable to recover it. 00:38:19.084 [2024-06-07 14:40:42.449885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.449893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.450212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.450220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 
00:38:19.085 [2024-06-07 14:40:42.450557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.450566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.450749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.450757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.451098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.451106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.451419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.451427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.451643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.451650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.451994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.452002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.452316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.452324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.452494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.452502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.452834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.452841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.453075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.453084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 
00:38:19.085 [2024-06-07 14:40:42.453297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.453305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.453647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.453655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.453955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.453963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.454298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.454306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.454642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.454650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.454834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.454842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.455095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.455102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.455265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.455274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.455617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.455625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.455804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.455811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 
00:38:19.085 [2024-06-07 14:40:42.455971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.455979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.456310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.456318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.456649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.456658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.457081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.457090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.457386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.457393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.457698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.457706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.457891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.457899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.458204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.458213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.458541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.458549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.458733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.458741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 
00:38:19.085 [2024-06-07 14:40:42.459079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.459086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.459265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.459274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.459584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.459592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.459897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.459906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.460221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.460230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.460601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.460608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.085 qpair failed and we were unable to recover it. 00:38:19.085 [2024-06-07 14:40:42.460923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.085 [2024-06-07 14:40:42.460931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.461230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.461240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.461423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.461431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.461621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.461628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 
00:38:19.086 [2024-06-07 14:40:42.461788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.461796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.461952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.461962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.462278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.462287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.462603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.462610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.462926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.462934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.463273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.463281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.463604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.463612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.463885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.463893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.464178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.464186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.464479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.464487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 
00:38:19.086 [2024-06-07 14:40:42.464718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.464725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.465066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.465074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.465230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.465238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.465581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.465589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.465781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.465789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.466097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.466106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.466285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.466294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.466619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.466628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.466944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.466953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 00:38:19.086 [2024-06-07 14:40:42.467275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.086 [2024-06-07 14:40:42.467283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.086 qpair failed and we were unable to recover it. 
00:38:19.086 [2024-06-07 14:40:42.467464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.086 [2024-06-07 14:40:42.467471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.086 qpair failed and we were unable to recover it.
00:38:19.086-00:38:19.092 [2024-06-07 14:40:42.467836 - 14:40:42.527158] the same three messages repeat for every reconnect attempt in this interval: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:38:19.092 [2024-06-07 14:40:42.527473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.527481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.527813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.527821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.528130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.528137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.528285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.528293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.528593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.528601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.528897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.528905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.529216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.529224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.529545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.529554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.529923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.529931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.530085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.530093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 
00:38:19.092 [2024-06-07 14:40:42.530266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.530274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.530439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.530448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.530613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.530621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.530801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.530808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.531116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.531124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.531439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.531446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.531602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.531609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.531916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.531925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.532240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.532247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.532569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.532576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 
00:38:19.092 [2024-06-07 14:40:42.532893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.532903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.533246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.533255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.533425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.533431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.533798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.533806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.533980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.092 [2024-06-07 14:40:42.533988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.092 qpair failed and we were unable to recover it. 00:38:19.092 [2024-06-07 14:40:42.534163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.534172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.534500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.534510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.534820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.534829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.535147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.535155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.535483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.535492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 
00:38:19.093 [2024-06-07 14:40:42.535806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.535814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.535992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.535999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.536040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.536046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.536351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.536360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.536676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.536684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.536999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.537008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.537170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.537179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.537464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.537472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.537712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.537720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.538031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.538041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 
00:38:19.093 [2024-06-07 14:40:42.538343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.538351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.538670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.538686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.538872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.538880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.539032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.539039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.539233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.539241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.539586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.539594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.539640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.539646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.539924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.539931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.540108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.540117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.540452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.540460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 
00:38:19.093 [2024-06-07 14:40:42.540763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.540772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.541109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.541117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.541430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.541438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.541616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.541623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.541915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.541923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.542213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.542221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.542538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.542546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.542907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.542915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.543215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.543222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 00:38:19.093 [2024-06-07 14:40:42.543552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.093 [2024-06-07 14:40:42.543561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.093 qpair failed and we were unable to recover it. 
00:38:19.093 [2024-06-07 14:40:42.543870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.543880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.544064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.544072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.544371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.544378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.544677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.544685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.544841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.544850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.545173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.545182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.545489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.545498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.545789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.545797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.546101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.546109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.546443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.546452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 
00:38:19.094 [2024-06-07 14:40:42.546766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.546774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.546961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.546970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.547230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.547239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.547420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.547428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.547721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.547728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.548037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.548045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.548233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.548241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.548410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.548418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.548746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.548754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.549076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.549084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 
00:38:19.094 [2024-06-07 14:40:42.549411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.549418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.549723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.549731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.549909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.549916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.550221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.550229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.550648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.550655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.550834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.550841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.551011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.551019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.551351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.551358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.551743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.551752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.551909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.551918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 
00:38:19.094 [2024-06-07 14:40:42.552192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.552203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.552500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.552509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.552669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.552678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.552962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.552970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.553272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.553281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.553450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.553457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.553808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.553816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.094 qpair failed and we were unable to recover it. 00:38:19.094 [2024-06-07 14:40:42.553995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.094 [2024-06-07 14:40:42.554002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.554217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.554225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.554270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.554278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 
00:38:19.095 [2024-06-07 14:40:42.554554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.554563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.554738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.554745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.555027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.555035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.555305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.555313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.555497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.555504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.555671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.555678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.555860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.555868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.556195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.556203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.556355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.556363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.556701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.556710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 
00:38:19.095 [2024-06-07 14:40:42.556922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.556931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.557260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.557269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.557457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.557465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.557812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.557820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.558135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.558143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.558457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.558465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.558765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.558773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.559098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.559106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.559463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.559470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.559778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.559785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 
00:38:19.095 [2024-06-07 14:40:42.560079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.560089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.560278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.560286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.560445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.560452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.560765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.560773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.561040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.561049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.561354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.561362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.561675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.561684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.562002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.562010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.562344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.562353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.562518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.562526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 
00:38:19.095 [2024-06-07 14:40:42.562836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.562844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.563146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.563155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.563332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.563340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.563652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.563660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.563974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.563982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.095 qpair failed and we were unable to recover it. 00:38:19.095 [2024-06-07 14:40:42.564168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.095 [2024-06-07 14:40:42.564177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.096 qpair failed and we were unable to recover it. 00:38:19.096 [2024-06-07 14:40:42.564344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.096 [2024-06-07 14:40:42.564352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.096 qpair failed and we were unable to recover it. 00:38:19.096 [2024-06-07 14:40:42.564670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.096 [2024-06-07 14:40:42.564678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.096 qpair failed and we were unable to recover it. 00:38:19.096 [2024-06-07 14:40:42.564990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.096 [2024-06-07 14:40:42.564999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.096 qpair failed and we were unable to recover it. 00:38:19.096 [2024-06-07 14:40:42.565334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.096 [2024-06-07 14:40:42.565342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.096 qpair failed and we were unable to recover it. 
00:38:19.098 [2024-06-07 14:40:42.587120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.587130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.587452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.587460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.587618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.587626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.587954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.587963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.588141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.588150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.588383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.588391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.588573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.588580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.588764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.588772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.589165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.589269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.589547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.589580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.589822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.589857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.590095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.590127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2174000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.590211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.590220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.590522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.590530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.590722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.590729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.590898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.590907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.591066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.591075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.591251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.591259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.591587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.591595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.098 [2024-06-07 14:40:42.591773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.098 [2024-06-07 14:40:42.591780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420
00:38:19.098 qpair failed and we were unable to recover it.
00:38:19.101 [2024-06-07 14:40:42.617411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.617419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.617707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.617715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.617898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.617906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.618183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.618191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.618242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.618249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.618523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.618531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.618844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.618853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.619200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.619209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.619489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.619496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.619816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.619824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 
00:38:19.101 [2024-06-07 14:40:42.620023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.620033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.620072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.620080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.620284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.620292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.620574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.620582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.620802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.620809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.621116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.621124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.621429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.621438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.621609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.621618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.621927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.621936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.622276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.622285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 
00:38:19.101 [2024-06-07 14:40:42.622510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.622517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.622828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.622836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.623015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.623022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.623328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.623335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.101 [2024-06-07 14:40:42.623655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.101 [2024-06-07 14:40:42.623663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.101 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.623853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.623861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.624183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.624191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.624525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.624533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.624688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.624696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.625029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.625037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 
00:38:19.102 [2024-06-07 14:40:42.625351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.625360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.625548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.625556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.625856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.625864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.626125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.626133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.626448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.626457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.626810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.626818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.627110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.627118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.627429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.627438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.627790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.627798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.627867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.627874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 
00:38:19.102 [2024-06-07 14:40:42.628138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.628145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.628484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.628492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.628762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.628770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.629108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.629117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.629293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.629301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.629634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.629642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.629957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.629965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.630302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.630311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.630655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.630663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.630999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.631006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 
00:38:19.102 [2024-06-07 14:40:42.631320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.631330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.631649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.631657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.631813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.631821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.632108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.632117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.632400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.632408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.102 [2024-06-07 14:40:42.632705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.102 [2024-06-07 14:40:42.632713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.102 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.632951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.632959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.633244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.633253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.633537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.633544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.633727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.633735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 
00:38:19.103 [2024-06-07 14:40:42.634022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.634030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.634208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.634216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.634517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.634524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.634813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.634821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.635153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.635161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.635474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.635483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.635762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.635769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.636085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.636093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.636414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.636423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.636606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.636614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 
00:38:19.103 [2024-06-07 14:40:42.636771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.636779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.636860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.636868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.637030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.637038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.637227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.637235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.637570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.637578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.637617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.637624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.637819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.637826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.638020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.638028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.638309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.638317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.638651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.638659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 
00:38:19.103 [2024-06-07 14:40:42.638812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.638821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.639234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.639242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.639573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.639582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.639923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.639931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.640119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.640127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.640437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.640445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.640634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.640642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.640847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.640855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.641155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.641164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.641202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.641210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 
00:38:19.103 [2024-06-07 14:40:42.641431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.641442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.641737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.641746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.642067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.642076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.103 [2024-06-07 14:40:42.642369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.103 [2024-06-07 14:40:42.642378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.103 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.642546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.642554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.642824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.642833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.643144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.643153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.643442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.643451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.643605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.643614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.643891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.643900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 
00:38:19.104 [2024-06-07 14:40:42.644089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.644098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.644385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.644394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.644717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.644726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.645034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.645042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.645112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.645120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.645388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.645397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.645710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.645719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.645917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.645926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.646244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.646252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.646563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.646571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 
00:38:19.104 [2024-06-07 14:40:42.646615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.646622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.646797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.646804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.647023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.647031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.647412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.647421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.647603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.647612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.647903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.647911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.648250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.648259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.648586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.648594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.648889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.648898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.649222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.649231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 
00:38:19.104 [2024-06-07 14:40:42.649551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.649560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.649854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.649863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.650065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.650073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.650277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.650285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.650556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.650564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.650750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.650757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.651061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.651069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.651224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.651232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.651512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.651521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.651786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.651794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 
00:38:19.104 [2024-06-07 14:40:42.651838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.651848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.104 qpair failed and we were unable to recover it. 00:38:19.104 [2024-06-07 14:40:42.652057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.104 [2024-06-07 14:40:42.652065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.652379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.652388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.652706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.652713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.652913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.652922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.653086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.653095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.653433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.653441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.653781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.653790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.654100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.654109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.654275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.654284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 
00:38:19.105 [2024-06-07 14:40:42.654453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.654462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.654502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.654510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.654686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.654695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.655021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.655030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.655366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.655375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.655529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.655537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.655729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.655736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.656059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.656067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.656369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.656377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.656660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.656668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 
00:38:19.105 [2024-06-07 14:40:42.656987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.656995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.657301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.657309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.657620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.657629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.657940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.657948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.658362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.658370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.658708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.658716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.659022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.659030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.659272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.659280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.659550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.659558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.659779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.659787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 
00:38:19.105 [2024-06-07 14:40:42.659971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.659978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.660288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.660296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.660622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.660630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.660942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.660951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.661139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.661147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.661421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.661429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.661646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.661655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.661951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.661958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.662292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.662301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 00:38:19.105 [2024-06-07 14:40:42.662506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.105 [2024-06-07 14:40:42.662514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.105 qpair failed and we were unable to recover it. 
00:38:19.106 [2024-06-07 14:40:42.662778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.662786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.663143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.663152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.663488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.663496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.663653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.663661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.663941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.663950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.664274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.664282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.664329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.664335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.664636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.664644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.664986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.664993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.665187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.665199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 
00:38:19.106 [2024-06-07 14:40:42.665324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.665331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.665508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.665516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.665707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.665715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.665973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.665981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.666159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.666168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.666493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.666502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.666784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.666791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.667097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.667105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.667273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.667282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.667483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.667492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 
00:38:19.106 [2024-06-07 14:40:42.667675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.667683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.668005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.668013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.668304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.668312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.668524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.668532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.668846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.668854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.669048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.669057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.669265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.669272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.669421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.669431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.669771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.669779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.669935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.669943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 
00:38:19.106 [2024-06-07 14:40:42.670098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.670105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.670278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.670286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.670464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.106 [2024-06-07 14:40:42.670473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.106 qpair failed and we were unable to recover it. 00:38:19.106 [2024-06-07 14:40:42.670656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.670664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.671006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.671013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.671242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.671250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.671571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.671580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.671898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.671906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.672240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.672249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.672576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.672584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 
00:38:19.107 [2024-06-07 14:40:42.672761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.672769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.672852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.672860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.673013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.673020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.673237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.673246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.673550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.673558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.673857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.673866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.674073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.674081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.674342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.674351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.674779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.674788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.674979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.674988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 
00:38:19.107 [2024-06-07 14:40:42.675166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.675175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.675504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.675515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.675636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.675644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.675970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.675979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.676185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.676196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.676392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.676400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.676713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.676721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.677033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.677041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.677228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.677235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.677427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.677436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 
00:38:19.107 [2024-06-07 14:40:42.677632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.677640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.677961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.677969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.678284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.678292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.678622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.678629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.678973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.678981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.679026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.679032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.679237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.679244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.679492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.679502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.679687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.679696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.679874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.679881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 
00:38:19.107 [2024-06-07 14:40:42.680059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.107 [2024-06-07 14:40:42.680067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.107 qpair failed and we were unable to recover it. 00:38:19.107 [2024-06-07 14:40:42.680432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.680440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.680711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.680720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.680995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.681003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.681189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.681199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.681399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.681408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.681701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.681708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.681890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.681897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.682187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.682197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.682511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.682519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 
00:38:19.108 [2024-06-07 14:40:42.682789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.682797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.682985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.682994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.683189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.683205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.683524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.683533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.683836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.683844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.683988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.683995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.684299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.684308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.684615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.684622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.684803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.684811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.685161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.685169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 
00:38:19.108 [2024-06-07 14:40:42.685214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.685221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.685445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.685452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.685766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.685774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.686044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.686052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.686363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.686371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.686692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.686699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.686900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.686908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.687234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.687242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.687547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.687556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.687736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.687744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 
00:38:19.108 [2024-06-07 14:40:42.688045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.688053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.688273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.688281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.688558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.688565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.688755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.688762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.688918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.688925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.689151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.689158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.689479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.689487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.689795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.108 [2024-06-07 14:40:42.689804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.108 qpair failed and we were unable to recover it. 00:38:19.108 [2024-06-07 14:40:42.689879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.689886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.690101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.690108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 
00:38:19.109 [2024-06-07 14:40:42.690334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.690341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.690610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.690619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.690825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.690834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.691158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.691167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.691488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.691496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.691814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.691822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.692124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.692132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.692450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.692459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.692756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.692765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.693094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.693103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 
00:38:19.109 [2024-06-07 14:40:42.693311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.693321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.693656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.693665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.693983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.693992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.694299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.694306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.694679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.694687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.694884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.694891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.694942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.694948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.695232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.695239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.695437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.695445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.695763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.695771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 
00:38:19.109 [2024-06-07 14:40:42.696101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.696109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.696417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.696425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.696597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.696604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.696896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.696904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.697081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.697088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.697414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.697423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.697713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.697720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.698036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.698045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.698370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.698378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.698696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.698705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 
00:38:19.109 [2024-06-07 14:40:42.699000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.699007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.699323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.699331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.699521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.699529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.699858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.699867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.700049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.700058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.700220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.700229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.109 qpair failed and we were unable to recover it. 00:38:19.109 [2024-06-07 14:40:42.700421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.109 [2024-06-07 14:40:42.700429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.700738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.700747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.701083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.701091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.701271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.701278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 
00:38:19.110 [2024-06-07 14:40:42.701536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.701544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.701758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.701765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.702083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.702091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.702253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.702260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.702459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.702467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.702640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.702648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.702938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.702946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.703203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.703211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.703516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.703524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.703846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.703854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 
00:38:19.110 [2024-06-07 14:40:42.704237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.704245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.704529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.704537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.704687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.704696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.704867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.704875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.705096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.705104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.705306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.705314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.705645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.705653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.705968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.705976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.706015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.706022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.706210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.706219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 
00:38:19.110 [2024-06-07 14:40:42.706404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.706412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.706743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.706752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.707055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.707063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.707244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.707252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.707541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.707550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.707719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.707727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.110 [2024-06-07 14:40:42.708055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.110 [2024-06-07 14:40:42.708063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.110 qpair failed and we were unable to recover it. 00:38:19.380 [2024-06-07 14:40:42.708354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.380 [2024-06-07 14:40:42.708364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.380 qpair failed and we were unable to recover it. 00:38:19.380 [2024-06-07 14:40:42.708704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.380 [2024-06-07 14:40:42.708712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.380 qpair failed and we were unable to recover it. 00:38:19.380 [2024-06-07 14:40:42.709021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.380 [2024-06-07 14:40:42.709029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.380 qpair failed and we were unable to recover it. 
00:38:19.380 [2024-06-07 14:40:42.709073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.380 [2024-06-07 14:40:42.709080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.380 qpair failed and we were unable to recover it. 00:38:19.380 [2024-06-07 14:40:42.709391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.380 [2024-06-07 14:40:42.709400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.380 qpair failed and we were unable to recover it. 00:38:19.380 [2024-06-07 14:40:42.709735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.380 [2024-06-07 14:40:42.709742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.380 qpair failed and we were unable to recover it. 00:38:19.380 [2024-06-07 14:40:42.709924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.380 [2024-06-07 14:40:42.709931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.380 qpair failed and we were unable to recover it. 00:38:19.380 [2024-06-07 14:40:42.710123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.380 [2024-06-07 14:40:42.710132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.380 qpair failed and we were unable to recover it. 00:38:19.380 [2024-06-07 14:40:42.710310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.380 [2024-06-07 14:40:42.710319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.380 qpair failed and we were unable to recover it. 00:38:19.380 [2024-06-07 14:40:42.710674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.380 [2024-06-07 14:40:42.710681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.380 qpair failed and we were unable to recover it. 00:38:19.380 [2024-06-07 14:40:42.710876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.380 [2024-06-07 14:40:42.710889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.380 qpair failed and we were unable to recover it. 00:38:19.380 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:19.380 [2024-06-07 14:40:42.711205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.380 [2024-06-07 14:40:42.711215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.380 qpair failed and we were unable to recover it. 
00:38:19.380 [2024-06-07 14:40:42.711393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.380 [2024-06-07 14:40:42.711400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.380 qpair failed and we were unable to recover it. 00:38:19.380 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:38:19.380 [2024-06-07 14:40:42.711578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.380 [2024-06-07 14:40:42.711587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.380 qpair failed and we were unable to recover it. 00:38:19.380 [2024-06-07 14:40:42.711762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.380 [2024-06-07 14:40:42.711770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:19.381 [2024-06-07 14:40:42.711951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.711961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:19.381 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.381 [2024-06-07 14:40:42.712372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.712382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.712687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.712696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.712883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.712892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.713184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.713196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 
00:38:19.381 [2024-06-07 14:40:42.713521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.713529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.713708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.713716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.713934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.713942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.714285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.714294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.714524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.714533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.714713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.714720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.715015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.715024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.715200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.715209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.715389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.715397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.715711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.715719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 
00:38:19.381 [2024-06-07 14:40:42.716051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.716059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.716370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.716379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.716682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.716691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.716877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.716884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.717054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.717063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.717253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.717262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.717549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.717557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.717741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.717749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.717919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.717928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.718106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.718115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 
00:38:19.381 [2024-06-07 14:40:42.718430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.718439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.718645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.718654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.718976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.718984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.719279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.719288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.719450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.719457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.719627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.719635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.719964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.719972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.720260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.720268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.720591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.720601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.720865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.720873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 
00:38:19.381 [2024-06-07 14:40:42.721188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.721198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.381 qpair failed and we were unable to recover it. 00:38:19.381 [2024-06-07 14:40:42.721527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.381 [2024-06-07 14:40:42.721536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.721719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.721727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.722043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.722051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.722374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.722382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.722718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.722726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.722910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.722918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.723087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.723095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.723263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.723271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.723315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.723323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 
00:38:19.382 [2024-06-07 14:40:42.723608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.723616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.723922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.723931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.724216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.724225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.724400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.724408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.724711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.724719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.724898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.724906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.725236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.725244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.725511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.725519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.725801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.725809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.726122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.726131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 
00:38:19.382 [2024-06-07 14:40:42.726325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.726333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.726368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.726375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.726535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.726544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.726695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.726704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.727002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.727011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.727212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.727221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.727517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.727525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.727870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.727878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.728083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.728092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.728369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.728377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 
00:38:19.382 [2024-06-07 14:40:42.728716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.728724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.729033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.729041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.729358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.729366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.729559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.729567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.729892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.729901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.730219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.730228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.730416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.730423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.730581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.730588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.730791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.730801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.382 qpair failed and we were unable to recover it. 00:38:19.382 [2024-06-07 14:40:42.730960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.382 [2024-06-07 14:40:42.730970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 
00:38:19.383 [2024-06-07 14:40:42.731275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.731283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.731595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.731604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.731898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.731905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.732082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.732091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.732393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.732401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.732735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.732744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.732932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.732940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.733255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.733265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.733588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.733596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.733908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.733916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 
00:38:19.383 [2024-06-07 14:40:42.734209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.734218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.734526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.734535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.734847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.734856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.735169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.735177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.735472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.735479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.735806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.735814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.736128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.736137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.736368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.736376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.736701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.736710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.737019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.737027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 
00:38:19.383 [2024-06-07 14:40:42.737214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.737222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.737510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.737518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.737828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.737835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.738027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.738035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.738291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.738298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.738487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.738496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.738658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.738667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.738824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.738834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.739117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.739126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.739527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.739536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 
00:38:19.383 [2024-06-07 14:40:42.739877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.739887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.740074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.740083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.740396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.740405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.740717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.740726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.741061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.741069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.741368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.741377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.741549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.741556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.383 qpair failed and we were unable to recover it. 00:38:19.383 [2024-06-07 14:40:42.741825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.383 [2024-06-07 14:40:42.741832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.742146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.742156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.742341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.742350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 
00:38:19.384 [2024-06-07 14:40:42.742641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.742649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.742979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.742987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.743310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.743319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.743643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.743651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.743829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.743836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.744137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.744145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.744459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.744467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.744505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.744513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.744791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.744800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.745137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.745145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 
00:38:19.384 [2024-06-07 14:40:42.745294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.745301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.745697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.745706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.746000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.746009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.746330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.746339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.746656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.746665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.746975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.746983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.747294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.747303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.747614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.747622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.747914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.747923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.748232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.748240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 
00:38:19.384 [2024-06-07 14:40:42.748571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.748579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.748883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.748892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.749266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.749274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.749589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.749598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.749947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.749955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.750304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.750312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.750536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.750544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.750683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.750691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.750860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.750867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.751137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.751145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 
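The run of entries above is the host-side NVMe/TCP initiator retrying against 10.0.0.2:4420 while the target side is down: errno 111 is ECONNREFUSED on Linux, so every connect() attempt is being refused and the qpair cannot be re-established. A quick way to confirm what errno 111 maps to on the test node (a minimal sketch; any Python 3 interpreter on the host would do):

    $ python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    ECONNREFUSED - Connection refused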
00:38:19.384 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:19.384 [2024-06-07 14:40:42.751316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.751325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:19.384 [2024-06-07 14:40:42.751666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.751675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.384 [2024-06-07 14:40:42.751953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.751962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 [2024-06-07 14:40:42.752103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.384 [2024-06-07 14:40:42.752111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.384 qpair failed and we were unable to recover it. 00:38:19.384 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.385 [2024-06-07 14:40:42.752393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.752402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.752563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.752570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.752886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.752894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.753193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.753212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 
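Buried in the retry noise on the line above, the test harness (host/target_disconnect.sh) installs its cleanup trap and then issues its first RPC, rpc_cmd bdev_malloc_create 64 512 -b Malloc0, asking the target to create a 64 MB RAM-backed bdev named Malloc0 with a 512-byte block size. Outside the harness the same step would normally go through SPDK's rpc.py; a sketch, assuming the target's RPC socket is at the default /var/tmp/spdk.sock:

    # create a 64 MB malloc bdev with 512-byte blocks, named Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0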
00:38:19.385 [2024-06-07 14:40:42.753372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.753380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.753655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.753662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.753874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.753882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.753915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.753922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.754220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.754228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.754560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.754567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.754745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.754752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.755080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.755088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.755308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.755316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.755601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.755609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 
00:38:19.385 [2024-06-07 14:40:42.755912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.755921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.756231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.756240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.756412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.756420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.756733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.756741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.757051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.757058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.757368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.757377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.757704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.757711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.757884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.757892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.758058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.758066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.758380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.758389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 
00:38:19.385 [2024-06-07 14:40:42.758599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.758607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.758934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.758942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.759208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.759215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.759492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.759500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.759828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.759836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.760128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.760138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.760180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.385 [2024-06-07 14:40:42.760186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.385 qpair failed and we were unable to recover it. 00:38:19.385 [2024-06-07 14:40:42.760488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.760496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.760656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.760664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.760824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.760833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 
00:38:19.386 [2024-06-07 14:40:42.761115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.761123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.761428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.761436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.761746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.761754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.762069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.762077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.762394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.762403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.762596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.762604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.762926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.762934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.763246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.763254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.763462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.763469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.763626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.763634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 
00:38:19.386 [2024-06-07 14:40:42.763899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.763908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.764264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.764273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.764458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.764465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.764784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.764793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.764984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.764992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.765290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.765299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.765585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.765593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.765876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.765885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.766320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.766328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.766660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.766669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 
00:38:19.386 [2024-06-07 14:40:42.766954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.766963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.767151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.767159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.767460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.767467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.767658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.767666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.767973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.767981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.768163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.768170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.768489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.768497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.768677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.768685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.768768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.768775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.769140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.769148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 
00:38:19.386 Malloc0 00:38:19.386 [2024-06-07 14:40:42.769432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.769441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 [2024-06-07 14:40:42.769788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.769796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.386 [2024-06-07 14:40:42.770095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.770103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.386 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:19.386 [2024-06-07 14:40:42.770414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.386 [2024-06-07 14:40:42.770422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.386 qpair failed and we were unable to recover it. 00:38:19.387 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.387 [2024-06-07 14:40:42.770774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.770782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.387 [2024-06-07 14:40:42.771084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.771092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.771416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.771424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.771583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.771591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 
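The "Malloc0" at the start of the line above is the reply from that RPC (the name of the bdev just created); right after it the harness runs rpc_cmd nvmf_create_transport -t tcp -o to bring up the NVMe-oF TCP transport on the target. A hedged rpc.py equivalent (the -o flag is copied from the test script as-is; the socket path is assumed to be the default):

    # initialize the TCP transport for the NVMe-oF target
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o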
00:38:19.387 [2024-06-07 14:40:42.771874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.771882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.772217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.772225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.772551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.772558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.772874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.772882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.773062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.773069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.773309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.773318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.773675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.773682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.773987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.773996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.774184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.774192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.774520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.774530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 
00:38:19.387 [2024-06-07 14:40:42.774833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.774842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.775152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.775161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.775479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.775488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.775641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.775650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.775818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.775827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.775973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.775982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.776191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.776203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.776363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.776371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.776543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.776550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 
00:38:19.387 [2024-06-07 14:40:42.776677] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:19.387 [2024-06-07 14:40:42.776866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.776873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.777178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.777185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.777350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.777358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.777695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.777703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.778041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.778048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.778442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.778451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.778776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.778784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.778966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.778974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.779271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.779279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 
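The tcp.c notice "*** TCP Transport Init ***" at the start of the line above is the target-side confirmation that the transport requested by the previous RPC came up. If one wanted to double-check from the host, listing the transports should show a tcp entry (a sketch, again assuming the default RPC socket):

    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_transports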
00:38:19.387 [2024-06-07 14:40:42.779472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.779480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.779773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.779782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.779976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.779985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.780145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.780153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.780443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.387 [2024-06-07 14:40:42.780452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.387 qpair failed and we were unable to recover it. 00:38:19.387 [2024-06-07 14:40:42.780713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.780721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.781035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.781044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.781228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.781239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.781569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.781577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.781893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.781902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 
00:38:19.388 [2024-06-07 14:40:42.782068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.782077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.782258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.782266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.782560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.782567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.782857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.782864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.783200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.783208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.783254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.783261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.783463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.783472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.783791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.783798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.784135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.784143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.784503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.784511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 
00:38:19.388 [2024-06-07 14:40:42.784556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.784563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.784931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.784939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.785253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.785261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.785577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.785585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.388 [2024-06-07 14:40:42.785935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.785944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:19.388 [2024-06-07 14:40:42.786285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.786293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.388 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.388 [2024-06-07 14:40:42.786627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.786636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [2024-06-07 14:40:42.786970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.786978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 
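Interleaved with the connection retries above, the harness creates the subsystem this test reconnects to: rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, i.e. an NVMe-oF subsystem with that NQN, any host allowed to connect (-a), and serial number SPDK00000000000001. The usual follow-on steps are to attach the Malloc0 namespace and add a TCP listener; they are not visible in this slice of the log, so treat the last two lines below as an assumption about what comes next, roughly:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420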
00:38:19.388 [2024-06-07 14:40:42.787143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.388 [2024-06-07 14:40:42.787152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.388 qpair failed and we were unable to recover it. 00:38:19.388 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f217c000b90 (addr=10.0.0.2, port=4420) repeats through 2024-06-07 14:40:42.795409 ...]
00:38:19.389 [2024-06-07 14:40:42.795716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.389 [2024-06-07 14:40:42.795724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.389 qpair failed and we were unable to recover it. 00:38:19.389 [2024-06-07 14:40:42.796039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.389 [2024-06-07 14:40:42.796048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.389 qpair failed and we were unable to recover it. 00:38:19.389 [2024-06-07 14:40:42.796403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.389 [2024-06-07 14:40:42.796411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.389 qpair failed and we were unable to recover it. 00:38:19.389 [2024-06-07 14:40:42.796680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.389 [2024-06-07 14:40:42.796688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.389 qpair failed and we were unable to recover it. 00:38:19.389 [2024-06-07 14:40:42.797021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.389 [2024-06-07 14:40:42.797029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.389 qpair failed and we were unable to recover it. 00:38:19.389 [2024-06-07 14:40:42.797087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.389 [2024-06-07 14:40:42.797094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.389 qpair failed and we were unable to recover it. 00:38:19.389 [2024-06-07 14:40:42.797391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.389 [2024-06-07 14:40:42.797400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.389 qpair failed and we were unable to recover it. 00:38:19.389 [2024-06-07 14:40:42.797561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.389 [2024-06-07 14:40:42.797570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.389 qpair failed and we were unable to recover it. 00:38:19.389 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.389 [2024-06-07 14:40:42.797882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.389 [2024-06-07 14:40:42.797891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.389 qpair failed and we were unable to recover it. 
00:38:19.389 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:19.389 [2024-06-07 14:40:42.798199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.389 [2024-06-07 14:40:42.798209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.389 qpair failed and we were unable to recover it. 00:38:19.389 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.389 [2024-06-07 14:40:42.798513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.389 [2024-06-07 14:40:42.798521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.389 qpair failed and we were unable to recover it. 00:38:19.389 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.389 [2024-06-07 14:40:42.798831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.389 [2024-06-07 14:40:42.798841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.389 qpair failed and we were unable to recover it. 00:38:19.389 [2024-06-07 14:40:42.799155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.389 [2024-06-07 14:40:42.799163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.389 qpair failed and we were unable to recover it. 00:38:19.390 [2024-06-07 14:40:42.799342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.390 [2024-06-07 14:40:42.799350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.390 qpair failed and we were unable to recover it. 00:38:19.390 [2024-06-07 14:40:42.799694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.390 [2024-06-07 14:40:42.799702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.390 qpair failed and we were unable to recover it. 00:38:19.390 [2024-06-07 14:40:42.800015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.390 [2024-06-07 14:40:42.800023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.390 qpair failed and we were unable to recover it. 00:38:19.390 [2024-06-07 14:40:42.800349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.390 [2024-06-07 14:40:42.800357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.390 qpair failed and we were unable to recover it. 
00:38:19.390 [2024-06-07 14:40:42.800673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.390 [2024-06-07 14:40:42.800682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.390 qpair failed and we were unable to recover it. 00:38:19.390 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f217c000b90 repeats through 2024-06-07 14:40:42.809110 ...]
00:38:19.390 [2024-06-07 14:40:42.809292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.390 [2024-06-07 14:40:42.809300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.390 qpair failed and we were unable to recover it. 00:38:19.390 [2024-06-07 14:40:42.809519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.390 [2024-06-07 14:40:42.809528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.390 qpair failed and we were unable to recover it. 00:38:19.390 [2024-06-07 14:40:42.809826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.390 [2024-06-07 14:40:42.809835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.390 qpair failed and we were unable to recover it. 00:38:19.390 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.390 [2024-06-07 14:40:42.810102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.391 [2024-06-07 14:40:42.810111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.391 qpair failed and we were unable to recover it. 00:38:19.391 [2024-06-07 14:40:42.810271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.391 [2024-06-07 14:40:42.810281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.391 qpair failed and we were unable to recover it. 00:38:19.391 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:19.391 [2024-06-07 14:40:42.810620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.391 [2024-06-07 14:40:42.810629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.391 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.391 qpair failed and we were unable to recover it. 00:38:19.391 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.391 [2024-06-07 14:40:42.810969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.391 [2024-06-07 14:40:42.810978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.391 qpair failed and we were unable to recover it. 00:38:19.391 [2024-06-07 14:40:42.811311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.391 [2024-06-07 14:40:42.811320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.391 qpair failed and we were unable to recover it. 
00:38:19.391 [2024-06-07 14:40:42.811650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.391 [2024-06-07 14:40:42.811658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.391 qpair failed and we were unable to recover it. 00:38:19.391 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f217c000b90 repeats through 2024-06-07 14:40:42.816655 ...]
00:38:19.391 [2024-06-07 14:40:42.816919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:19.391 [2024-06-07 14:40:42.816973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.391 [2024-06-07 14:40:42.816980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f217c000b90 with addr=10.0.0.2, port=4420 00:38:19.391 qpair failed and we were unable to recover it. 00:38:19.391 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.391 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:19.391 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.391 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.391 [2024-06-07 14:40:42.827389] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.391 [2024-06-07 14:40:42.827464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.391 [2024-06-07 14:40:42.827479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.391 [2024-06-07 14:40:42.827485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.391 [2024-06-07 14:40:42.827490] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.391 [2024-06-07 14:40:42.827505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.391 qpair failed and we were unable to recover it. 00:38:19.391 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.391 14:40:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 821943 00:38:19.391 [2024-06-07 14:40:42.837469] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.391 [2024-06-07 14:40:42.837528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.391 [2024-06-07 14:40:42.837541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.391 [2024-06-07 14:40:42.837547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.391 [2024-06-07 14:40:42.837551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.391 [2024-06-07 14:40:42.837563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.391 qpair failed and we were unable to recover it. 
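For context: the harness lines above (host/target_disconnect.sh@24-26) add the namespace and listeners for the NVMe-oF/TCP target that the initiator has been failing to reach, which is consistent with every connect() attempt returning errno = 111 (ECONNREFUSED) until the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice appears. A minimal sketch of equivalent standalone SPDK rpc.py calls follows; the rpc.py path and the bdev-creation line are assumptions (they are not shown in this excerpt), while the add_ns/add_listener calls mirror the log.
  # Hedged sketch, assuming a running nvmf_tgt, an already-created nqn.2016-06.io.spdk:cnode1
  # subsystem with the TCP transport enabled, and SPDK's scripts/rpc.py on the target host;
  # only the last three calls correspond to commands visible in the log above.
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512                                              # assumed backing bdev (64 MiB, 512 B blocks)
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                          # target_disconnect.sh@24
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 # target_disconnect.sh@25
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420                  # target_disconnect.sh@26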
00:38:19.391 [2024-06-07 14:40:42.847460] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.391 [2024-06-07 14:40:42.847508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.391 [2024-06-07 14:40:42.847521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.391 [2024-06-07 14:40:42.847526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.391 [2024-06-07 14:40:42.847530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.391 [2024-06-07 14:40:42.847541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.391 qpair failed and we were unable to recover it. 00:38:19.391 [2024-06-07 14:40:42.857325] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.391 [2024-06-07 14:40:42.857379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.392 [2024-06-07 14:40:42.857391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.392 [2024-06-07 14:40:42.857396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.392 [2024-06-07 14:40:42.857401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.392 [2024-06-07 14:40:42.857412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.392 qpair failed and we were unable to recover it. 00:38:19.392 [2024-06-07 14:40:42.867436] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.392 [2024-06-07 14:40:42.867492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.392 [2024-06-07 14:40:42.867505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.392 [2024-06-07 14:40:42.867511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.392 [2024-06-07 14:40:42.867515] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.392 [2024-06-07 14:40:42.867526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.392 qpair failed and we were unable to recover it. 
00:38:19.392 [2024-06-07 14:40:42.877423] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.392 [2024-06-07 14:40:42.877475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.392 [2024-06-07 14:40:42.877487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.392 [2024-06-07 14:40:42.877492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.392 [2024-06-07 14:40:42.877496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.392 [2024-06-07 14:40:42.877507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.392 qpair failed and we were unable to recover it. 00:38:19.392 [2024-06-07 14:40:42.887478] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.392 [2024-06-07 14:40:42.887559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.392 [2024-06-07 14:40:42.887571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.392 [2024-06-07 14:40:42.887576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.392 [2024-06-07 14:40:42.887581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.392 [2024-06-07 14:40:42.887591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.392 qpair failed and we were unable to recover it. 00:38:19.392 [2024-06-07 14:40:42.897554] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.392 [2024-06-07 14:40:42.897603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.392 [2024-06-07 14:40:42.897615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.392 [2024-06-07 14:40:42.897620] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.392 [2024-06-07 14:40:42.897624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.392 [2024-06-07 14:40:42.897635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.392 qpair failed and we were unable to recover it. 
00:38:19.392 [2024-06-07 14:40:42.907403] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.392 [2024-06-07 14:40:42.907471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.392 [2024-06-07 14:40:42.907483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.392 [2024-06-07 14:40:42.907489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.392 [2024-06-07 14:40:42.907493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.392 [2024-06-07 14:40:42.907506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.392 qpair failed and we were unable to recover it. 00:38:19.392 [2024-06-07 14:40:42.917433] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.392 [2024-06-07 14:40:42.917487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.392 [2024-06-07 14:40:42.917499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.392 [2024-06-07 14:40:42.917504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.392 [2024-06-07 14:40:42.917509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.392 [2024-06-07 14:40:42.917519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.392 qpair failed and we were unable to recover it. 00:38:19.392 [2024-06-07 14:40:42.927555] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.392 [2024-06-07 14:40:42.927598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.392 [2024-06-07 14:40:42.927611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.392 [2024-06-07 14:40:42.927615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.392 [2024-06-07 14:40:42.927620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.392 [2024-06-07 14:40:42.927630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.392 qpair failed and we were unable to recover it. 
00:38:19.392 [2024-06-07 14:40:42.937499] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.392 [2024-06-07 14:40:42.937560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.392 [2024-06-07 14:40:42.937572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.392 [2024-06-07 14:40:42.937577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.392 [2024-06-07 14:40:42.937581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.392 [2024-06-07 14:40:42.937592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.392 qpair failed and we were unable to recover it. 00:38:19.392 [2024-06-07 14:40:42.947711] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.392 [2024-06-07 14:40:42.947813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.392 [2024-06-07 14:40:42.947827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.392 [2024-06-07 14:40:42.947833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.392 [2024-06-07 14:40:42.947839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.392 [2024-06-07 14:40:42.947851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.392 qpair failed and we were unable to recover it. 00:38:19.392 [2024-06-07 14:40:42.957694] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.392 [2024-06-07 14:40:42.957739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.392 [2024-06-07 14:40:42.957755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.392 [2024-06-07 14:40:42.957760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.392 [2024-06-07 14:40:42.957765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.392 [2024-06-07 14:40:42.957777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.392 qpair failed and we were unable to recover it. 
00:38:19.392 [2024-06-07 14:40:42.967723] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.392 [2024-06-07 14:40:42.967805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.392 [2024-06-07 14:40:42.967817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.392 [2024-06-07 14:40:42.967822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.392 [2024-06-07 14:40:42.967827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.392 [2024-06-07 14:40:42.967837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.392 qpair failed and we were unable to recover it. 00:38:19.392 [2024-06-07 14:40:42.977727] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.392 [2024-06-07 14:40:42.977777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.392 [2024-06-07 14:40:42.977789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.392 [2024-06-07 14:40:42.977794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.392 [2024-06-07 14:40:42.977798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.392 [2024-06-07 14:40:42.977809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.392 qpair failed and we were unable to recover it. 00:38:19.392 [2024-06-07 14:40:42.987773] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.392 [2024-06-07 14:40:42.987831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.392 [2024-06-07 14:40:42.987843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.392 [2024-06-07 14:40:42.987848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.392 [2024-06-07 14:40:42.987852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.392 [2024-06-07 14:40:42.987863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.393 qpair failed and we were unable to recover it. 
00:38:19.393 [2024-06-07 14:40:42.997791] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.393 [2024-06-07 14:40:42.997856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.393 [2024-06-07 14:40:42.997868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.393 [2024-06-07 14:40:42.997873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.393 [2024-06-07 14:40:42.997881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.393 [2024-06-07 14:40:42.997891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.393 qpair failed and we were unable to recover it. 00:38:19.393 [2024-06-07 14:40:43.007834] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.393 [2024-06-07 14:40:43.007888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.393 [2024-06-07 14:40:43.007900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.393 [2024-06-07 14:40:43.007905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.393 [2024-06-07 14:40:43.007910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.393 [2024-06-07 14:40:43.007921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.393 qpair failed and we were unable to recover it. 00:38:19.393 [2024-06-07 14:40:43.017767] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.393 [2024-06-07 14:40:43.017816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.393 [2024-06-07 14:40:43.017829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.393 [2024-06-07 14:40:43.017834] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.393 [2024-06-07 14:40:43.017838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.393 [2024-06-07 14:40:43.017849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.393 qpair failed and we were unable to recover it. 
00:38:19.654 [2024-06-07 14:40:43.027887] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.654 [2024-06-07 14:40:43.027941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.654 [2024-06-07 14:40:43.027953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.654 [2024-06-07 14:40:43.027958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.654 [2024-06-07 14:40:43.027963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.654 [2024-06-07 14:40:43.027973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.654 qpair failed and we were unable to recover it. 00:38:19.654 [2024-06-07 14:40:43.037927] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.654 [2024-06-07 14:40:43.037974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.654 [2024-06-07 14:40:43.037986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.654 [2024-06-07 14:40:43.037991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.654 [2024-06-07 14:40:43.037996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.654 [2024-06-07 14:40:43.038006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.654 qpair failed and we were unable to recover it. 00:38:19.654 [2024-06-07 14:40:43.047937] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.654 [2024-06-07 14:40:43.047992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.654 [2024-06-07 14:40:43.048004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.654 [2024-06-07 14:40:43.048009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.654 [2024-06-07 14:40:43.048014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.654 [2024-06-07 14:40:43.048024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.654 qpair failed and we were unable to recover it. 
00:38:19.655 [2024-06-07 14:40:43.057949] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.655 [2024-06-07 14:40:43.057998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.655 [2024-06-07 14:40:43.058010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.655 [2024-06-07 14:40:43.058015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.655 [2024-06-07 14:40:43.058020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.655 [2024-06-07 14:40:43.058030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.655 qpair failed and we were unable to recover it. 00:38:19.655 [2024-06-07 14:40:43.067989] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.655 [2024-06-07 14:40:43.068091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.655 [2024-06-07 14:40:43.068105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.655 [2024-06-07 14:40:43.068110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.655 [2024-06-07 14:40:43.068114] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.655 [2024-06-07 14:40:43.068124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.655 qpair failed and we were unable to recover it. 00:38:19.655 [2024-06-07 14:40:43.078147] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.655 [2024-06-07 14:40:43.078208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.655 [2024-06-07 14:40:43.078221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.655 [2024-06-07 14:40:43.078226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.655 [2024-06-07 14:40:43.078230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.655 [2024-06-07 14:40:43.078241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.655 qpair failed and we were unable to recover it. 
00:38:19.655 [2024-06-07 14:40:43.088103] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.655 [2024-06-07 14:40:43.088168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.655 [2024-06-07 14:40:43.088180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.655 [2024-06-07 14:40:43.088185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.655 [2024-06-07 14:40:43.088192] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.655 [2024-06-07 14:40:43.088206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.655 qpair failed and we were unable to recover it. 00:38:19.655 [2024-06-07 14:40:43.098111] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.655 [2024-06-07 14:40:43.098159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.655 [2024-06-07 14:40:43.098172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.655 [2024-06-07 14:40:43.098177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.655 [2024-06-07 14:40:43.098181] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.655 [2024-06-07 14:40:43.098191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.655 qpair failed and we were unable to recover it. 00:38:19.655 [2024-06-07 14:40:43.108159] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.655 [2024-06-07 14:40:43.108220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.655 [2024-06-07 14:40:43.108233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.655 [2024-06-07 14:40:43.108238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.655 [2024-06-07 14:40:43.108243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.655 [2024-06-07 14:40:43.108254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.655 qpair failed and we were unable to recover it. 
00:38:19.655 [2024-06-07 14:40:43.118151] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.655 [2024-06-07 14:40:43.118200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.655 [2024-06-07 14:40:43.118212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.655 [2024-06-07 14:40:43.118217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.655 [2024-06-07 14:40:43.118222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.655 [2024-06-07 14:40:43.118232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.655 qpair failed and we were unable to recover it. 00:38:19.655 [2024-06-07 14:40:43.128175] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.655 [2024-06-07 14:40:43.128231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.655 [2024-06-07 14:40:43.128243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.655 [2024-06-07 14:40:43.128248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.655 [2024-06-07 14:40:43.128253] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.655 [2024-06-07 14:40:43.128263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.655 qpair failed and we were unable to recover it. 00:38:19.655 [2024-06-07 14:40:43.138193] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.655 [2024-06-07 14:40:43.138268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.655 [2024-06-07 14:40:43.138281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.655 [2024-06-07 14:40:43.138286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.655 [2024-06-07 14:40:43.138290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.655 [2024-06-07 14:40:43.138301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.655 qpair failed and we were unable to recover it. 
00:38:19.655 [2024-06-07 14:40:43.148084] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.655 [2024-06-07 14:40:43.148144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.655 [2024-06-07 14:40:43.148157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.655 [2024-06-07 14:40:43.148163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.655 [2024-06-07 14:40:43.148169] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.655 [2024-06-07 14:40:43.148182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.655 qpair failed and we were unable to recover it. 00:38:19.655 [2024-06-07 14:40:43.158241] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.655 [2024-06-07 14:40:43.158320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.655 [2024-06-07 14:40:43.158332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.655 [2024-06-07 14:40:43.158338] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.655 [2024-06-07 14:40:43.158342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.655 [2024-06-07 14:40:43.158352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.655 qpair failed and we were unable to recover it. 00:38:19.655 [2024-06-07 14:40:43.168267] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.655 [2024-06-07 14:40:43.168362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.655 [2024-06-07 14:40:43.168373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.655 [2024-06-07 14:40:43.168378] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.655 [2024-06-07 14:40:43.168383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.655 [2024-06-07 14:40:43.168394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.655 qpair failed and we were unable to recover it. 
00:38:19.655 [2024-06-07 14:40:43.178282] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.655 [2024-06-07 14:40:43.178341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.655 [2024-06-07 14:40:43.178353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.655 [2024-06-07 14:40:43.178364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.655 [2024-06-07 14:40:43.178368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.655 [2024-06-07 14:40:43.178379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.655 qpair failed and we were unable to recover it. 00:38:19.655 [2024-06-07 14:40:43.188311] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.655 [2024-06-07 14:40:43.188364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.655 [2024-06-07 14:40:43.188376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.656 [2024-06-07 14:40:43.188380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.656 [2024-06-07 14:40:43.188385] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.656 [2024-06-07 14:40:43.188396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.656 qpair failed and we were unable to recover it. 00:38:19.656 [2024-06-07 14:40:43.198354] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.656 [2024-06-07 14:40:43.198444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.656 [2024-06-07 14:40:43.198458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.656 [2024-06-07 14:40:43.198463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.656 [2024-06-07 14:40:43.198467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.656 [2024-06-07 14:40:43.198478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.656 qpair failed and we were unable to recover it. 
00:38:19.656 [2024-06-07 14:40:43.208391] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.656 [2024-06-07 14:40:43.208438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.656 [2024-06-07 14:40:43.208450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.656 [2024-06-07 14:40:43.208456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.656 [2024-06-07 14:40:43.208460] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.656 [2024-06-07 14:40:43.208471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.656 qpair failed and we were unable to recover it. 00:38:19.656 [2024-06-07 14:40:43.218455] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.656 [2024-06-07 14:40:43.218504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.656 [2024-06-07 14:40:43.218516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.656 [2024-06-07 14:40:43.218521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.656 [2024-06-07 14:40:43.218526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.656 [2024-06-07 14:40:43.218536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.656 qpair failed and we were unable to recover it. 00:38:19.656 [2024-06-07 14:40:43.228324] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.656 [2024-06-07 14:40:43.228378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.656 [2024-06-07 14:40:43.228390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.656 [2024-06-07 14:40:43.228395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.656 [2024-06-07 14:40:43.228400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.656 [2024-06-07 14:40:43.228410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.656 qpair failed and we were unable to recover it. 
00:38:19.656 [2024-06-07 14:40:43.238369] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.656 [2024-06-07 14:40:43.238419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.656 [2024-06-07 14:40:43.238430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.656 [2024-06-07 14:40:43.238435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.656 [2024-06-07 14:40:43.238440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.656 [2024-06-07 14:40:43.238450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.656 qpair failed and we were unable to recover it. 00:38:19.656 [2024-06-07 14:40:43.248498] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.656 [2024-06-07 14:40:43.248541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.656 [2024-06-07 14:40:43.248553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.656 [2024-06-07 14:40:43.248558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.656 [2024-06-07 14:40:43.248563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.656 [2024-06-07 14:40:43.248574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.656 qpair failed and we were unable to recover it. 00:38:19.656 [2024-06-07 14:40:43.258563] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.656 [2024-06-07 14:40:43.258612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.656 [2024-06-07 14:40:43.258625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.656 [2024-06-07 14:40:43.258630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.656 [2024-06-07 14:40:43.258635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.656 [2024-06-07 14:40:43.258645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.656 qpair failed and we were unable to recover it. 
00:38:19.656 [2024-06-07 14:40:43.268562] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.656 [2024-06-07 14:40:43.268613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.656 [2024-06-07 14:40:43.268628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.656 [2024-06-07 14:40:43.268634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.656 [2024-06-07 14:40:43.268638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.656 [2024-06-07 14:40:43.268648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.656 qpair failed and we were unable to recover it. 00:38:19.656 [2024-06-07 14:40:43.278452] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.656 [2024-06-07 14:40:43.278502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.656 [2024-06-07 14:40:43.278514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.656 [2024-06-07 14:40:43.278519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.656 [2024-06-07 14:40:43.278523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.656 [2024-06-07 14:40:43.278533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.656 qpair failed and we were unable to recover it. 00:38:19.656 [2024-06-07 14:40:43.288586] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.656 [2024-06-07 14:40:43.288636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.656 [2024-06-07 14:40:43.288647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.656 [2024-06-07 14:40:43.288652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.656 [2024-06-07 14:40:43.288657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.656 [2024-06-07 14:40:43.288667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.656 qpair failed and we were unable to recover it. 
00:38:19.656 [2024-06-07 14:40:43.298642] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.656 [2024-06-07 14:40:43.298694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.656 [2024-06-07 14:40:43.298707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.656 [2024-06-07 14:40:43.298712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.656 [2024-06-07 14:40:43.298716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.656 [2024-06-07 14:40:43.298727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.656 qpair failed and we were unable to recover it. 00:38:19.919 [2024-06-07 14:40:43.308529] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.919 [2024-06-07 14:40:43.308591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.919 [2024-06-07 14:40:43.308603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.919 [2024-06-07 14:40:43.308608] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.919 [2024-06-07 14:40:43.308612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.919 [2024-06-07 14:40:43.308626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.919 qpair failed and we were unable to recover it. 00:38:19.919 [2024-06-07 14:40:43.318648] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.919 [2024-06-07 14:40:43.318732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.919 [2024-06-07 14:40:43.318744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.919 [2024-06-07 14:40:43.318749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.919 [2024-06-07 14:40:43.318754] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.919 [2024-06-07 14:40:43.318764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.919 qpair failed and we were unable to recover it. 
00:38:19.919 [2024-06-07 14:40:43.328576] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.919 [2024-06-07 14:40:43.328622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.919 [2024-06-07 14:40:43.328634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.919 [2024-06-07 14:40:43.328639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.919 [2024-06-07 14:40:43.328643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.919 [2024-06-07 14:40:43.328653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.919 qpair failed and we were unable to recover it. 00:38:19.919 [2024-06-07 14:40:43.338687] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.919 [2024-06-07 14:40:43.338742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.919 [2024-06-07 14:40:43.338753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.919 [2024-06-07 14:40:43.338758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.919 [2024-06-07 14:40:43.338763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.919 [2024-06-07 14:40:43.338773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.919 qpair failed and we were unable to recover it. 00:38:19.919 [2024-06-07 14:40:43.348771] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.919 [2024-06-07 14:40:43.348826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.919 [2024-06-07 14:40:43.348838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.920 [2024-06-07 14:40:43.348843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.920 [2024-06-07 14:40:43.348847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.920 [2024-06-07 14:40:43.348857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.920 qpair failed and we were unable to recover it. 
00:38:19.920 [2024-06-07 14:40:43.358790] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.920 [2024-06-07 14:40:43.358834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.920 [2024-06-07 14:40:43.358849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.920 [2024-06-07 14:40:43.358854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.920 [2024-06-07 14:40:43.358858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.920 [2024-06-07 14:40:43.358869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.920 qpair failed and we were unable to recover it. 00:38:19.920 [2024-06-07 14:40:43.368808] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.920 [2024-06-07 14:40:43.368862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.920 [2024-06-07 14:40:43.368874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.920 [2024-06-07 14:40:43.368879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.920 [2024-06-07 14:40:43.368883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.920 [2024-06-07 14:40:43.368894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.920 qpair failed and we were unable to recover it. 00:38:19.920 [2024-06-07 14:40:43.378850] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.920 [2024-06-07 14:40:43.378908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.920 [2024-06-07 14:40:43.378926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.920 [2024-06-07 14:40:43.378932] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.920 [2024-06-07 14:40:43.378937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.920 [2024-06-07 14:40:43.378950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.920 qpair failed and we were unable to recover it. 
00:38:19.920 [2024-06-07 14:40:43.388902] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.920 [2024-06-07 14:40:43.388951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.920 [2024-06-07 14:40:43.388964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.920 [2024-06-07 14:40:43.388969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.920 [2024-06-07 14:40:43.388974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.920 [2024-06-07 14:40:43.388985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.920 qpair failed and we were unable to recover it. 00:38:19.920 [2024-06-07 14:40:43.398878] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.920 [2024-06-07 14:40:43.398929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.920 [2024-06-07 14:40:43.398941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.920 [2024-06-07 14:40:43.398946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.920 [2024-06-07 14:40:43.398953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.920 [2024-06-07 14:40:43.398964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.920 qpair failed and we were unable to recover it. 00:38:19.920 [2024-06-07 14:40:43.408924] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.920 [2024-06-07 14:40:43.409011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.920 [2024-06-07 14:40:43.409024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.920 [2024-06-07 14:40:43.409030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.920 [2024-06-07 14:40:43.409035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.920 [2024-06-07 14:40:43.409045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.920 qpair failed and we were unable to recover it. 
00:38:19.920 [2024-06-07 14:40:43.418987] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.920 [2024-06-07 14:40:43.419083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.920 [2024-06-07 14:40:43.419102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.920 [2024-06-07 14:40:43.419108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.920 [2024-06-07 14:40:43.419113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.920 [2024-06-07 14:40:43.419126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.920 qpair failed and we were unable to recover it. 00:38:19.920 [2024-06-07 14:40:43.428975] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.920 [2024-06-07 14:40:43.429060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.920 [2024-06-07 14:40:43.429079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.920 [2024-06-07 14:40:43.429085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.920 [2024-06-07 14:40:43.429090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.920 [2024-06-07 14:40:43.429104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.920 qpair failed and we were unable to recover it. 00:38:19.920 [2024-06-07 14:40:43.438890] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.920 [2024-06-07 14:40:43.438944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.920 [2024-06-07 14:40:43.438957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.920 [2024-06-07 14:40:43.438963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.920 [2024-06-07 14:40:43.438967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.920 [2024-06-07 14:40:43.438978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.920 qpair failed and we were unable to recover it. 
00:38:19.920 [2024-06-07 14:40:43.449065] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.920 [2024-06-07 14:40:43.449116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.920 [2024-06-07 14:40:43.449129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.920 [2024-06-07 14:40:43.449133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.920 [2024-06-07 14:40:43.449138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.920 [2024-06-07 14:40:43.449148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.920 qpair failed and we were unable to recover it. 00:38:19.920 [2024-06-07 14:40:43.459074] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.920 [2024-06-07 14:40:43.459126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.920 [2024-06-07 14:40:43.459138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.920 [2024-06-07 14:40:43.459143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.920 [2024-06-07 14:40:43.459148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.920 [2024-06-07 14:40:43.459158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.920 qpair failed and we were unable to recover it. 00:38:19.920 [2024-06-07 14:40:43.469085] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.920 [2024-06-07 14:40:43.469138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.920 [2024-06-07 14:40:43.469149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.920 [2024-06-07 14:40:43.469154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.920 [2024-06-07 14:40:43.469159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.920 [2024-06-07 14:40:43.469169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.920 qpair failed and we were unable to recover it. 
00:38:19.920 [2024-06-07 14:40:43.479117] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.920 [2024-06-07 14:40:43.479162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.920 [2024-06-07 14:40:43.479175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.920 [2024-06-07 14:40:43.479179] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.920 [2024-06-07 14:40:43.479184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.920 [2024-06-07 14:40:43.479197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.921 qpair failed and we were unable to recover it. 00:38:19.921 [2024-06-07 14:40:43.489161] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.921 [2024-06-07 14:40:43.489216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.921 [2024-06-07 14:40:43.489228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.921 [2024-06-07 14:40:43.489233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.921 [2024-06-07 14:40:43.489240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.921 [2024-06-07 14:40:43.489252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.921 qpair failed and we were unable to recover it. 00:38:19.921 [2024-06-07 14:40:43.499174] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.921 [2024-06-07 14:40:43.499234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.921 [2024-06-07 14:40:43.499246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.921 [2024-06-07 14:40:43.499251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.921 [2024-06-07 14:40:43.499256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.921 [2024-06-07 14:40:43.499266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.921 qpair failed and we were unable to recover it. 
00:38:19.921 [2024-06-07 14:40:43.509234] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.921 [2024-06-07 14:40:43.509288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.921 [2024-06-07 14:40:43.509300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.921 [2024-06-07 14:40:43.509305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.921 [2024-06-07 14:40:43.509309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.921 [2024-06-07 14:40:43.509320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.921 qpair failed and we were unable to recover it. 00:38:19.921 [2024-06-07 14:40:43.519250] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.921 [2024-06-07 14:40:43.519338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.921 [2024-06-07 14:40:43.519351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.921 [2024-06-07 14:40:43.519357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.921 [2024-06-07 14:40:43.519362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.921 [2024-06-07 14:40:43.519374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.921 qpair failed and we were unable to recover it. 00:38:19.921 [2024-06-07 14:40:43.529256] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.921 [2024-06-07 14:40:43.529305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.921 [2024-06-07 14:40:43.529317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.921 [2024-06-07 14:40:43.529322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.921 [2024-06-07 14:40:43.529327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.921 [2024-06-07 14:40:43.529337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.921 qpair failed and we were unable to recover it. 
00:38:19.921 [2024-06-07 14:40:43.539305] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.921 [2024-06-07 14:40:43.539355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.921 [2024-06-07 14:40:43.539367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.921 [2024-06-07 14:40:43.539372] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.921 [2024-06-07 14:40:43.539376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.921 [2024-06-07 14:40:43.539386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.921 qpair failed and we were unable to recover it. 00:38:19.921 [2024-06-07 14:40:43.549319] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.921 [2024-06-07 14:40:43.549377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.921 [2024-06-07 14:40:43.549389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.921 [2024-06-07 14:40:43.549394] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.921 [2024-06-07 14:40:43.549399] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.921 [2024-06-07 14:40:43.549409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.921 qpair failed and we were unable to recover it. 00:38:19.921 [2024-06-07 14:40:43.559327] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:19.921 [2024-06-07 14:40:43.559373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:19.921 [2024-06-07 14:40:43.559386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:19.921 [2024-06-07 14:40:43.559391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:19.921 [2024-06-07 14:40:43.559395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:19.921 [2024-06-07 14:40:43.559406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:19.921 qpair failed and we were unable to recover it. 
00:38:20.184 [2024-06-07 14:40:43.569301] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.184 [2024-06-07 14:40:43.569346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.184 [2024-06-07 14:40:43.569358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.184 [2024-06-07 14:40:43.569363] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.184 [2024-06-07 14:40:43.569368] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.184 [2024-06-07 14:40:43.569378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.184 qpair failed and we were unable to recover it. 00:38:20.184 [2024-06-07 14:40:43.579409] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.184 [2024-06-07 14:40:43.579458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.184 [2024-06-07 14:40:43.579470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.184 [2024-06-07 14:40:43.579478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.184 [2024-06-07 14:40:43.579483] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.184 [2024-06-07 14:40:43.579493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.184 qpair failed and we were unable to recover it. 00:38:20.184 [2024-06-07 14:40:43.589323] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.184 [2024-06-07 14:40:43.589378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.184 [2024-06-07 14:40:43.589391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.184 [2024-06-07 14:40:43.589396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.184 [2024-06-07 14:40:43.589400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.184 [2024-06-07 14:40:43.589411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.184 qpair failed and we were unable to recover it. 
00:38:20.184 [2024-06-07 14:40:43.599445] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.184 [2024-06-07 14:40:43.599505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.184 [2024-06-07 14:40:43.599518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.184 [2024-06-07 14:40:43.599523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.184 [2024-06-07 14:40:43.599527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.184 [2024-06-07 14:40:43.599538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.184 qpair failed and we were unable to recover it. 00:38:20.184 [2024-06-07 14:40:43.609496] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.185 [2024-06-07 14:40:43.609541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.185 [2024-06-07 14:40:43.609553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.185 [2024-06-07 14:40:43.609558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.185 [2024-06-07 14:40:43.609563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.185 [2024-06-07 14:40:43.609574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.185 qpair failed and we were unable to recover it. 00:38:20.185 [2024-06-07 14:40:43.619523] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.185 [2024-06-07 14:40:43.619571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.185 [2024-06-07 14:40:43.619584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.185 [2024-06-07 14:40:43.619589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.185 [2024-06-07 14:40:43.619594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.185 [2024-06-07 14:40:43.619604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.185 qpair failed and we were unable to recover it. 
00:38:20.185 [2024-06-07 14:40:43.629555] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.185 [2024-06-07 14:40:43.629607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.185 [2024-06-07 14:40:43.629619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.185 [2024-06-07 14:40:43.629624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.185 [2024-06-07 14:40:43.629629] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.185 [2024-06-07 14:40:43.629639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.185 qpair failed and we were unable to recover it. 00:38:20.185 [2024-06-07 14:40:43.639498] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.185 [2024-06-07 14:40:43.639597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.185 [2024-06-07 14:40:43.639610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.185 [2024-06-07 14:40:43.639615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.185 [2024-06-07 14:40:43.639620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.185 [2024-06-07 14:40:43.639630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.185 qpair failed and we were unable to recover it. 00:38:20.185 [2024-06-07 14:40:43.649612] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.185 [2024-06-07 14:40:43.649659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.185 [2024-06-07 14:40:43.649672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.185 [2024-06-07 14:40:43.649677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.185 [2024-06-07 14:40:43.649681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.185 [2024-06-07 14:40:43.649692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.185 qpair failed and we were unable to recover it. 
00:38:20.185 [2024-06-07 14:40:43.659505] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.185 [2024-06-07 14:40:43.659553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.185 [2024-06-07 14:40:43.659565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.185 [2024-06-07 14:40:43.659571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.185 [2024-06-07 14:40:43.659575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.185 [2024-06-07 14:40:43.659586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.185 qpair failed and we were unable to recover it. 00:38:20.185 [2024-06-07 14:40:43.669671] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.185 [2024-06-07 14:40:43.669723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.185 [2024-06-07 14:40:43.669737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.185 [2024-06-07 14:40:43.669743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.185 [2024-06-07 14:40:43.669747] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.185 [2024-06-07 14:40:43.669758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.185 qpair failed and we were unable to recover it. 00:38:20.185 [2024-06-07 14:40:43.679611] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.185 [2024-06-07 14:40:43.679659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.185 [2024-06-07 14:40:43.679671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.185 [2024-06-07 14:40:43.679676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.185 [2024-06-07 14:40:43.679681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.185 [2024-06-07 14:40:43.679691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.185 qpair failed and we were unable to recover it. 
00:38:20.185 [2024-06-07 14:40:43.689706] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.185 [2024-06-07 14:40:43.689753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.185 [2024-06-07 14:40:43.689765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.185 [2024-06-07 14:40:43.689770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.185 [2024-06-07 14:40:43.689775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.185 [2024-06-07 14:40:43.689785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.185 qpair failed and we were unable to recover it. 00:38:20.185 [2024-06-07 14:40:43.699755] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.185 [2024-06-07 14:40:43.699806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.185 [2024-06-07 14:40:43.699818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.185 [2024-06-07 14:40:43.699822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.185 [2024-06-07 14:40:43.699827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.185 [2024-06-07 14:40:43.699837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.185 qpair failed and we were unable to recover it. 00:38:20.185 [2024-06-07 14:40:43.709773] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.185 [2024-06-07 14:40:43.709825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.185 [2024-06-07 14:40:43.709837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.185 [2024-06-07 14:40:43.709842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.185 [2024-06-07 14:40:43.709847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.185 [2024-06-07 14:40:43.709860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.185 qpair failed and we were unable to recover it. 
00:38:20.185 [2024-06-07 14:40:43.719820] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.185 [2024-06-07 14:40:43.719867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.185 [2024-06-07 14:40:43.719879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.185 [2024-06-07 14:40:43.719884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.185 [2024-06-07 14:40:43.719889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.185 [2024-06-07 14:40:43.719899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.185 qpair failed and we were unable to recover it. 00:38:20.185 [2024-06-07 14:40:43.729795] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.185 [2024-06-07 14:40:43.729842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.185 [2024-06-07 14:40:43.729855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.185 [2024-06-07 14:40:43.729859] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.185 [2024-06-07 14:40:43.729864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.185 [2024-06-07 14:40:43.729874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.185 qpair failed and we were unable to recover it. 00:38:20.185 [2024-06-07 14:40:43.739866] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.186 [2024-06-07 14:40:43.739922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.186 [2024-06-07 14:40:43.739934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.186 [2024-06-07 14:40:43.739939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.186 [2024-06-07 14:40:43.739943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.186 [2024-06-07 14:40:43.739954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.186 qpair failed and we were unable to recover it. 
00:38:20.186 [2024-06-07 14:40:43.749899] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.186 [2024-06-07 14:40:43.749954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.186 [2024-06-07 14:40:43.749966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.186 [2024-06-07 14:40:43.749971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.186 [2024-06-07 14:40:43.749976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.186 [2024-06-07 14:40:43.749986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.186 qpair failed and we were unable to recover it. 00:38:20.186 [2024-06-07 14:40:43.759778] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.186 [2024-06-07 14:40:43.759827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.186 [2024-06-07 14:40:43.759841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.186 [2024-06-07 14:40:43.759846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.186 [2024-06-07 14:40:43.759850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.186 [2024-06-07 14:40:43.759860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.186 qpair failed and we were unable to recover it. 00:38:20.186 [2024-06-07 14:40:43.769829] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.186 [2024-06-07 14:40:43.769880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.186 [2024-06-07 14:40:43.769892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.186 [2024-06-07 14:40:43.769897] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.186 [2024-06-07 14:40:43.769902] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.186 [2024-06-07 14:40:43.769912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.186 qpair failed and we were unable to recover it. 
00:38:20.186 [2024-06-07 14:40:43.779965] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.186 [2024-06-07 14:40:43.780023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.186 [2024-06-07 14:40:43.780035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.186 [2024-06-07 14:40:43.780040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.186 [2024-06-07 14:40:43.780044] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.186 [2024-06-07 14:40:43.780055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.186 qpair failed and we were unable to recover it. 00:38:20.186 [2024-06-07 14:40:43.790004] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.186 [2024-06-07 14:40:43.790059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.186 [2024-06-07 14:40:43.790078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.186 [2024-06-07 14:40:43.790084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.186 [2024-06-07 14:40:43.790089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.186 [2024-06-07 14:40:43.790102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.186 qpair failed and we were unable to recover it. 00:38:20.186 [2024-06-07 14:40:43.800023] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.186 [2024-06-07 14:40:43.800071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.186 [2024-06-07 14:40:43.800085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.186 [2024-06-07 14:40:43.800090] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.186 [2024-06-07 14:40:43.800094] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.186 [2024-06-07 14:40:43.800109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.186 qpair failed and we were unable to recover it. 
00:38:20.186 [2024-06-07 14:40:43.809920] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.186 [2024-06-07 14:40:43.809970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.186 [2024-06-07 14:40:43.809982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.186 [2024-06-07 14:40:43.809987] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.186 [2024-06-07 14:40:43.809992] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.186 [2024-06-07 14:40:43.810002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.186 qpair failed and we were unable to recover it. 00:38:20.186 [2024-06-07 14:40:43.820080] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.186 [2024-06-07 14:40:43.820137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.186 [2024-06-07 14:40:43.820149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.186 [2024-06-07 14:40:43.820154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.186 [2024-06-07 14:40:43.820158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.186 [2024-06-07 14:40:43.820169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.186 qpair failed and we were unable to recover it. 00:38:20.186 [2024-06-07 14:40:43.830011] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.186 [2024-06-07 14:40:43.830066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.186 [2024-06-07 14:40:43.830078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.482 [2024-06-07 14:40:43.830084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.482 [2024-06-07 14:40:43.830090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.482 [2024-06-07 14:40:43.830100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.482 qpair failed and we were unable to recover it. 
00:38:20.482 [2024-06-07 14:40:43.840169] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.482 [2024-06-07 14:40:43.840247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.482 [2024-06-07 14:40:43.840259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.482 [2024-06-07 14:40:43.840264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.482 [2024-06-07 14:40:43.840269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.482 [2024-06-07 14:40:43.840280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.482 qpair failed and we were unable to recover it. 00:38:20.482 [2024-06-07 14:40:43.850155] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.482 [2024-06-07 14:40:43.850210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.482 [2024-06-07 14:40:43.850222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.482 [2024-06-07 14:40:43.850227] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.482 [2024-06-07 14:40:43.850231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.482 [2024-06-07 14:40:43.850242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.482 qpair failed and we were unable to recover it. 00:38:20.482 [2024-06-07 14:40:43.860060] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.483 [2024-06-07 14:40:43.860108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.483 [2024-06-07 14:40:43.860120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.483 [2024-06-07 14:40:43.860125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.483 [2024-06-07 14:40:43.860129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.483 [2024-06-07 14:40:43.860140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.483 qpair failed and we were unable to recover it. 
00:38:20.483 [2024-06-07 14:40:43.870254] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.483 [2024-06-07 14:40:43.870331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.483 [2024-06-07 14:40:43.870344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.483 [2024-06-07 14:40:43.870348] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.483 [2024-06-07 14:40:43.870353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.483 [2024-06-07 14:40:43.870363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.483 qpair failed and we were unable to recover it. 00:38:20.483 [2024-06-07 14:40:43.880236] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.483 [2024-06-07 14:40:43.880284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.483 [2024-06-07 14:40:43.880296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.483 [2024-06-07 14:40:43.880301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.483 [2024-06-07 14:40:43.880305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.483 [2024-06-07 14:40:43.880315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.483 qpair failed and we were unable to recover it. 00:38:20.483 [2024-06-07 14:40:43.890274] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.483 [2024-06-07 14:40:43.890326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.483 [2024-06-07 14:40:43.890338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.483 [2024-06-07 14:40:43.890343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.483 [2024-06-07 14:40:43.890350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.483 [2024-06-07 14:40:43.890361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.483 qpair failed and we were unable to recover it. 
00:38:20.483 [2024-06-07 14:40:43.900292] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.483 [2024-06-07 14:40:43.900339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.483 [2024-06-07 14:40:43.900351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.483 [2024-06-07 14:40:43.900356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.483 [2024-06-07 14:40:43.900361] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.483 [2024-06-07 14:40:43.900371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.483 qpair failed and we were unable to recover it. 00:38:20.483 [2024-06-07 14:40:43.910338] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.483 [2024-06-07 14:40:43.910393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.483 [2024-06-07 14:40:43.910406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.483 [2024-06-07 14:40:43.910411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.483 [2024-06-07 14:40:43.910416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.483 [2024-06-07 14:40:43.910427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.483 qpair failed and we were unable to recover it. 00:38:20.483 [2024-06-07 14:40:43.920358] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.483 [2024-06-07 14:40:43.920404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.483 [2024-06-07 14:40:43.920416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.483 [2024-06-07 14:40:43.920421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.483 [2024-06-07 14:40:43.920426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.483 [2024-06-07 14:40:43.920437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.483 qpair failed and we were unable to recover it. 
00:38:20.483 [2024-06-07 14:40:43.930252] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.483 [2024-06-07 14:40:43.930297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.483 [2024-06-07 14:40:43.930309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.483 [2024-06-07 14:40:43.930314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.483 [2024-06-07 14:40:43.930318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.483 [2024-06-07 14:40:43.930329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.483 qpair failed and we were unable to recover it. 00:38:20.483 [2024-06-07 14:40:43.940454] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.483 [2024-06-07 14:40:43.940509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.483 [2024-06-07 14:40:43.940521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.483 [2024-06-07 14:40:43.940526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.483 [2024-06-07 14:40:43.940531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.483 [2024-06-07 14:40:43.940541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.483 qpair failed and we were unable to recover it. 00:38:20.483 [2024-06-07 14:40:43.950451] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.483 [2024-06-07 14:40:43.950503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.483 [2024-06-07 14:40:43.950514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.483 [2024-06-07 14:40:43.950519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.483 [2024-06-07 14:40:43.950524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.483 [2024-06-07 14:40:43.950534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.483 qpair failed and we were unable to recover it. 
00:38:20.483 [2024-06-07 14:40:43.960464] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.483 [2024-06-07 14:40:43.960524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.483 [2024-06-07 14:40:43.960536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.483 [2024-06-07 14:40:43.960541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.483 [2024-06-07 14:40:43.960545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.483 [2024-06-07 14:40:43.960555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.483 qpair failed and we were unable to recover it. 00:38:20.483 [2024-06-07 14:40:43.970521] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.483 [2024-06-07 14:40:43.970608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.483 [2024-06-07 14:40:43.970620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.483 [2024-06-07 14:40:43.970626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.483 [2024-06-07 14:40:43.970630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.483 [2024-06-07 14:40:43.970640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.483 qpair failed and we were unable to recover it. 00:38:20.483 [2024-06-07 14:40:43.980577] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.483 [2024-06-07 14:40:43.980656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.483 [2024-06-07 14:40:43.980668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.483 [2024-06-07 14:40:43.980679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.483 [2024-06-07 14:40:43.980683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.483 [2024-06-07 14:40:43.980694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.483 qpair failed and we were unable to recover it. 
00:38:20.483 [2024-06-07 14:40:43.990474] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.483 [2024-06-07 14:40:43.990526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.483 [2024-06-07 14:40:43.990538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.483 [2024-06-07 14:40:43.990543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.483 [2024-06-07 14:40:43.990547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.484 [2024-06-07 14:40:43.990558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.484 qpair failed and we were unable to recover it. 00:38:20.484 [2024-06-07 14:40:44.000610] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.484 [2024-06-07 14:40:44.000659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.484 [2024-06-07 14:40:44.000672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.484 [2024-06-07 14:40:44.000677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.484 [2024-06-07 14:40:44.000681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.484 [2024-06-07 14:40:44.000691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.484 qpair failed and we were unable to recover it. 00:38:20.484 [2024-06-07 14:40:44.010516] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.484 [2024-06-07 14:40:44.010612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.484 [2024-06-07 14:40:44.010625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.484 [2024-06-07 14:40:44.010630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.484 [2024-06-07 14:40:44.010635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.484 [2024-06-07 14:40:44.010647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.484 qpair failed and we were unable to recover it. 
00:38:20.484 [2024-06-07 14:40:44.020648] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.484 [2024-06-07 14:40:44.020696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.484 [2024-06-07 14:40:44.020708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.484 [2024-06-07 14:40:44.020714] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.484 [2024-06-07 14:40:44.020718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.484 [2024-06-07 14:40:44.020728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.484 qpair failed and we were unable to recover it. 00:38:20.484 [2024-06-07 14:40:44.030682] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.484 [2024-06-07 14:40:44.030734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.484 [2024-06-07 14:40:44.030747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.484 [2024-06-07 14:40:44.030752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.484 [2024-06-07 14:40:44.030757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.484 [2024-06-07 14:40:44.030767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.484 qpair failed and we were unable to recover it. 00:38:20.484 [2024-06-07 14:40:44.040693] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.484 [2024-06-07 14:40:44.040736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.484 [2024-06-07 14:40:44.040748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.484 [2024-06-07 14:40:44.040753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.484 [2024-06-07 14:40:44.040757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.484 [2024-06-07 14:40:44.040767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.484 qpair failed and we were unable to recover it. 
00:38:20.484 [2024-06-07 14:40:44.050728] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.484 [2024-06-07 14:40:44.050776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.484 [2024-06-07 14:40:44.050789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.484 [2024-06-07 14:40:44.050794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.484 [2024-06-07 14:40:44.050798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.484 [2024-06-07 14:40:44.050809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.484 qpair failed and we were unable to recover it. 00:38:20.484 [2024-06-07 14:40:44.060734] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.484 [2024-06-07 14:40:44.060781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.484 [2024-06-07 14:40:44.060793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.484 [2024-06-07 14:40:44.060799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.484 [2024-06-07 14:40:44.060803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.484 [2024-06-07 14:40:44.060814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.484 qpair failed and we were unable to recover it. 00:38:20.484 [2024-06-07 14:40:44.070816] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.484 [2024-06-07 14:40:44.070898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.484 [2024-06-07 14:40:44.070910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.484 [2024-06-07 14:40:44.070919] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.484 [2024-06-07 14:40:44.070924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.484 [2024-06-07 14:40:44.070934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.484 qpair failed and we were unable to recover it. 
00:38:20.484 [2024-06-07 14:40:44.080772] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.484 [2024-06-07 14:40:44.080815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.484 [2024-06-07 14:40:44.080827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.484 [2024-06-07 14:40:44.080832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.484 [2024-06-07 14:40:44.080836] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.484 [2024-06-07 14:40:44.080846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.484 qpair failed and we were unable to recover it. 00:38:20.484 [2024-06-07 14:40:44.090832] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.484 [2024-06-07 14:40:44.090878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.484 [2024-06-07 14:40:44.090890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.484 [2024-06-07 14:40:44.090895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.484 [2024-06-07 14:40:44.090900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.484 [2024-06-07 14:40:44.090909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.484 qpair failed and we were unable to recover it. 00:38:20.484 [2024-06-07 14:40:44.100859] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.484 [2024-06-07 14:40:44.100911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.484 [2024-06-07 14:40:44.100930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.484 [2024-06-07 14:40:44.100936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.484 [2024-06-07 14:40:44.100940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.484 [2024-06-07 14:40:44.100954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.484 qpair failed and we were unable to recover it. 
00:38:20.484 [2024-06-07 14:40:44.110902] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.484 [2024-06-07 14:40:44.110958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.484 [2024-06-07 14:40:44.110976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.484 [2024-06-07 14:40:44.110982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.484 [2024-06-07 14:40:44.110986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.484 [2024-06-07 14:40:44.111000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.484 qpair failed and we were unable to recover it. 00:38:20.484 [2024-06-07 14:40:44.120928] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.484 [2024-06-07 14:40:44.121018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.484 [2024-06-07 14:40:44.121032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.484 [2024-06-07 14:40:44.121037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.484 [2024-06-07 14:40:44.121042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.484 [2024-06-07 14:40:44.121053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.484 qpair failed and we were unable to recover it. 00:38:20.748 [2024-06-07 14:40:44.130937] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.748 [2024-06-07 14:40:44.130987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.748 [2024-06-07 14:40:44.131006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.748 [2024-06-07 14:40:44.131012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.748 [2024-06-07 14:40:44.131017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.748 [2024-06-07 14:40:44.131031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.748 qpair failed and we were unable to recover it. 
00:38:20.748 [2024-06-07 14:40:44.140993] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.748 [2024-06-07 14:40:44.141081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.748 [2024-06-07 14:40:44.141095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.748 [2024-06-07 14:40:44.141100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.748 [2024-06-07 14:40:44.141105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.748 [2024-06-07 14:40:44.141116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.748 qpair failed and we were unable to recover it. 00:38:20.748 [2024-06-07 14:40:44.151012] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.748 [2024-06-07 14:40:44.151065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.748 [2024-06-07 14:40:44.151078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.748 [2024-06-07 14:40:44.151083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.748 [2024-06-07 14:40:44.151087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.748 [2024-06-07 14:40:44.151098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.748 qpair failed and we were unable to recover it. 00:38:20.748 [2024-06-07 14:40:44.161023] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.748 [2024-06-07 14:40:44.161069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.748 [2024-06-07 14:40:44.161086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.748 [2024-06-07 14:40:44.161091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.748 [2024-06-07 14:40:44.161096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.748 [2024-06-07 14:40:44.161107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.748 qpair failed and we were unable to recover it. 
00:38:20.748 [2024-06-07 14:40:44.170947] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.748 [2024-06-07 14:40:44.171003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.748 [2024-06-07 14:40:44.171016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.748 [2024-06-07 14:40:44.171021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.748 [2024-06-07 14:40:44.171025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.748 [2024-06-07 14:40:44.171036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.748 qpair failed and we were unable to recover it. 00:38:20.748 [2024-06-07 14:40:44.181124] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.748 [2024-06-07 14:40:44.181174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.748 [2024-06-07 14:40:44.181187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.748 [2024-06-07 14:40:44.181192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.748 [2024-06-07 14:40:44.181200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.748 [2024-06-07 14:40:44.181211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.748 qpair failed and we were unable to recover it. 00:38:20.748 [2024-06-07 14:40:44.191099] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.748 [2024-06-07 14:40:44.191192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.748 [2024-06-07 14:40:44.191209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.748 [2024-06-07 14:40:44.191214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.748 [2024-06-07 14:40:44.191219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.748 [2024-06-07 14:40:44.191229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.748 qpair failed and we were unable to recover it. 
00:38:20.748 [2024-06-07 14:40:44.201137] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.748 [2024-06-07 14:40:44.201189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.748 [2024-06-07 14:40:44.201206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.748 [2024-06-07 14:40:44.201211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.748 [2024-06-07 14:40:44.201215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.748 [2024-06-07 14:40:44.201231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.748 qpair failed and we were unable to recover it. 00:38:20.748 [2024-06-07 14:40:44.211187] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.748 [2024-06-07 14:40:44.211242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.748 [2024-06-07 14:40:44.211255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.748 [2024-06-07 14:40:44.211260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.748 [2024-06-07 14:40:44.211264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.748 [2024-06-07 14:40:44.211275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.748 qpair failed and we were unable to recover it. 00:38:20.748 [2024-06-07 14:40:44.221223] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.749 [2024-06-07 14:40:44.221273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.749 [2024-06-07 14:40:44.221285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.749 [2024-06-07 14:40:44.221291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.749 [2024-06-07 14:40:44.221295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.749 [2024-06-07 14:40:44.221306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.749 qpair failed and we were unable to recover it. 
00:38:20.749 [2024-06-07 14:40:44.231243] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.749 [2024-06-07 14:40:44.231296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.749 [2024-06-07 14:40:44.231308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.749 [2024-06-07 14:40:44.231313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.749 [2024-06-07 14:40:44.231318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.749 [2024-06-07 14:40:44.231328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.749 qpair failed and we were unable to recover it. 00:38:20.749 [2024-06-07 14:40:44.241270] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.749 [2024-06-07 14:40:44.241316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.749 [2024-06-07 14:40:44.241329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.749 [2024-06-07 14:40:44.241334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.749 [2024-06-07 14:40:44.241338] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.749 [2024-06-07 14:40:44.241349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.749 qpair failed and we were unable to recover it. 00:38:20.749 [2024-06-07 14:40:44.251184] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.749 [2024-06-07 14:40:44.251320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.749 [2024-06-07 14:40:44.251335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.749 [2024-06-07 14:40:44.251340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.749 [2024-06-07 14:40:44.251345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.749 [2024-06-07 14:40:44.251356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.749 qpair failed and we were unable to recover it. 
00:38:20.749 [2024-06-07 14:40:44.261319] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.749 [2024-06-07 14:40:44.261365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.749 [2024-06-07 14:40:44.261378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.749 [2024-06-07 14:40:44.261383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.749 [2024-06-07 14:40:44.261388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.749 [2024-06-07 14:40:44.261398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.749 qpair failed and we were unable to recover it. 00:38:20.749 [2024-06-07 14:40:44.271360] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.749 [2024-06-07 14:40:44.271413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.749 [2024-06-07 14:40:44.271426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.749 [2024-06-07 14:40:44.271431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.749 [2024-06-07 14:40:44.271435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.749 [2024-06-07 14:40:44.271447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.749 qpair failed and we were unable to recover it. 00:38:20.749 [2024-06-07 14:40:44.281320] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.749 [2024-06-07 14:40:44.281373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.749 [2024-06-07 14:40:44.281385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.749 [2024-06-07 14:40:44.281391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.749 [2024-06-07 14:40:44.281395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.749 [2024-06-07 14:40:44.281406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.749 qpair failed and we were unable to recover it. 
00:38:20.749 [2024-06-07 14:40:44.291431] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.749 [2024-06-07 14:40:44.291512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.749 [2024-06-07 14:40:44.291524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.749 [2024-06-07 14:40:44.291529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.749 [2024-06-07 14:40:44.291537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.749 [2024-06-07 14:40:44.291548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.749 qpair failed and we were unable to recover it. 00:38:20.749 [2024-06-07 14:40:44.301425] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.749 [2024-06-07 14:40:44.301482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.749 [2024-06-07 14:40:44.301494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.749 [2024-06-07 14:40:44.301499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.749 [2024-06-07 14:40:44.301504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.749 [2024-06-07 14:40:44.301514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.749 qpair failed and we were unable to recover it. 00:38:20.749 [2024-06-07 14:40:44.311353] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.749 [2024-06-07 14:40:44.311410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.749 [2024-06-07 14:40:44.311422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.749 [2024-06-07 14:40:44.311427] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.749 [2024-06-07 14:40:44.311432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.749 [2024-06-07 14:40:44.311442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.749 qpair failed and we were unable to recover it. 
00:38:20.749 [2024-06-07 14:40:44.321365] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.749 [2024-06-07 14:40:44.321411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.749 [2024-06-07 14:40:44.321423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.749 [2024-06-07 14:40:44.321428] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.749 [2024-06-07 14:40:44.321433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.749 [2024-06-07 14:40:44.321443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.749 qpair failed and we were unable to recover it. 00:38:20.749 [2024-06-07 14:40:44.331493] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.749 [2024-06-07 14:40:44.331541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.749 [2024-06-07 14:40:44.331553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.749 [2024-06-07 14:40:44.331559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.749 [2024-06-07 14:40:44.331563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.749 [2024-06-07 14:40:44.331573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.749 qpair failed and we were unable to recover it. 00:38:20.749 [2024-06-07 14:40:44.341444] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.749 [2024-06-07 14:40:44.341501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.749 [2024-06-07 14:40:44.341512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.749 [2024-06-07 14:40:44.341517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.749 [2024-06-07 14:40:44.341521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.749 [2024-06-07 14:40:44.341531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.749 qpair failed and we were unable to recover it. 
00:38:20.749 [2024-06-07 14:40:44.351584] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.749 [2024-06-07 14:40:44.351637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.749 [2024-06-07 14:40:44.351649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.749 [2024-06-07 14:40:44.351654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.750 [2024-06-07 14:40:44.351658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.750 [2024-06-07 14:40:44.351668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.750 qpair failed and we were unable to recover it. 00:38:20.750 [2024-06-07 14:40:44.361605] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.750 [2024-06-07 14:40:44.361651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.750 [2024-06-07 14:40:44.361663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.750 [2024-06-07 14:40:44.361668] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.750 [2024-06-07 14:40:44.361672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.750 [2024-06-07 14:40:44.361683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.750 qpair failed and we were unable to recover it. 00:38:20.750 [2024-06-07 14:40:44.371671] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.750 [2024-06-07 14:40:44.371720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.750 [2024-06-07 14:40:44.371732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.750 [2024-06-07 14:40:44.371737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.750 [2024-06-07 14:40:44.371742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.750 [2024-06-07 14:40:44.371752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.750 qpair failed and we were unable to recover it. 
00:38:20.750 [2024-06-07 14:40:44.381668] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.750 [2024-06-07 14:40:44.381713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.750 [2024-06-07 14:40:44.381725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.750 [2024-06-07 14:40:44.381733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.750 [2024-06-07 14:40:44.381737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.750 [2024-06-07 14:40:44.381747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.750 qpair failed and we were unable to recover it. 00:38:20.750 [2024-06-07 14:40:44.391699] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:20.750 [2024-06-07 14:40:44.391750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:20.750 [2024-06-07 14:40:44.391762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:20.750 [2024-06-07 14:40:44.391767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:20.750 [2024-06-07 14:40:44.391771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:20.750 [2024-06-07 14:40:44.391782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:20.750 qpair failed and we were unable to recover it. 00:38:21.013 [2024-06-07 14:40:44.401686] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.013 [2024-06-07 14:40:44.401738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.013 [2024-06-07 14:40:44.401750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.013 [2024-06-07 14:40:44.401755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.013 [2024-06-07 14:40:44.401760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.013 [2024-06-07 14:40:44.401770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.013 qpair failed and we were unable to recover it. 
00:38:21.013 [2024-06-07 14:40:44.411719] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.013 [2024-06-07 14:40:44.411809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.013 [2024-06-07 14:40:44.411821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.013 [2024-06-07 14:40:44.411826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.013 [2024-06-07 14:40:44.411831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.013 [2024-06-07 14:40:44.411842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.013 qpair failed and we were unable to recover it. 00:38:21.013 [2024-06-07 14:40:44.421771] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.013 [2024-06-07 14:40:44.421819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.013 [2024-06-07 14:40:44.421831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.013 [2024-06-07 14:40:44.421836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.013 [2024-06-07 14:40:44.421840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.013 [2024-06-07 14:40:44.421850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.013 qpair failed and we were unable to recover it. 00:38:21.013 [2024-06-07 14:40:44.431785] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.013 [2024-06-07 14:40:44.431838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.013 [2024-06-07 14:40:44.431850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.013 [2024-06-07 14:40:44.431855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.013 [2024-06-07 14:40:44.431859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.013 [2024-06-07 14:40:44.431869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.013 qpair failed and we were unable to recover it. 
00:38:21.013 [2024-06-07 14:40:44.441685] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.013 [2024-06-07 14:40:44.441729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.013 [2024-06-07 14:40:44.441742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.013 [2024-06-07 14:40:44.441747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.013 [2024-06-07 14:40:44.441751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.013 [2024-06-07 14:40:44.441762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.013 qpair failed and we were unable to recover it. 00:38:21.013 [2024-06-07 14:40:44.451829] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.013 [2024-06-07 14:40:44.451923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.013 [2024-06-07 14:40:44.451935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.013 [2024-06-07 14:40:44.451940] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.013 [2024-06-07 14:40:44.451945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.013 [2024-06-07 14:40:44.451955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.013 qpair failed and we were unable to recover it. 00:38:21.013 [2024-06-07 14:40:44.461877] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.013 [2024-06-07 14:40:44.461924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.013 [2024-06-07 14:40:44.461936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.013 [2024-06-07 14:40:44.461941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.013 [2024-06-07 14:40:44.461946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.013 [2024-06-07 14:40:44.461956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.013 qpair failed and we were unable to recover it. 
00:38:21.013 [2024-06-07 14:40:44.471898] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.013 [2024-06-07 14:40:44.471949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.013 [2024-06-07 14:40:44.471961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.013 [2024-06-07 14:40:44.471969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.013 [2024-06-07 14:40:44.471973] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.013 [2024-06-07 14:40:44.471984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.013 qpair failed and we were unable to recover it. 00:38:21.013 [2024-06-07 14:40:44.481913] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.013 [2024-06-07 14:40:44.481957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.013 [2024-06-07 14:40:44.481968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.013 [2024-06-07 14:40:44.481973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.013 [2024-06-07 14:40:44.481978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.013 [2024-06-07 14:40:44.481988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.013 qpair failed and we were unable to recover it. 00:38:21.013 [2024-06-07 14:40:44.491857] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.013 [2024-06-07 14:40:44.491909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.013 [2024-06-07 14:40:44.491921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.013 [2024-06-07 14:40:44.491926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.014 [2024-06-07 14:40:44.491930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.014 [2024-06-07 14:40:44.491941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.014 qpair failed and we were unable to recover it. 
00:38:21.014 [2024-06-07 14:40:44.501855] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.014 [2024-06-07 14:40:44.501909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.014 [2024-06-07 14:40:44.501921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.014 [2024-06-07 14:40:44.501926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.014 [2024-06-07 14:40:44.501931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.014 [2024-06-07 14:40:44.501941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.014 qpair failed and we were unable to recover it. 00:38:21.014 [2024-06-07 14:40:44.511981] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.014 [2024-06-07 14:40:44.512037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.014 [2024-06-07 14:40:44.512055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.014 [2024-06-07 14:40:44.512061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.014 [2024-06-07 14:40:44.512066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.014 [2024-06-07 14:40:44.512080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.014 qpair failed and we were unable to recover it. 00:38:21.014 [2024-06-07 14:40:44.522011] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.014 [2024-06-07 14:40:44.522058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.014 [2024-06-07 14:40:44.522071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.014 [2024-06-07 14:40:44.522077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.014 [2024-06-07 14:40:44.522081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.014 [2024-06-07 14:40:44.522092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.014 qpair failed and we were unable to recover it. 
00:38:21.014 [2024-06-07 14:40:44.532070] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.014 [2024-06-07 14:40:44.532157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.014 [2024-06-07 14:40:44.532170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.014 [2024-06-07 14:40:44.532175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.014 [2024-06-07 14:40:44.532179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.014 [2024-06-07 14:40:44.532189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.014 qpair failed and we were unable to recover it. 00:38:21.014 [2024-06-07 14:40:44.542080] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.014 [2024-06-07 14:40:44.542137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.014 [2024-06-07 14:40:44.542149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.014 [2024-06-07 14:40:44.542154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.014 [2024-06-07 14:40:44.542158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.014 [2024-06-07 14:40:44.542168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.014 qpair failed and we were unable to recover it. 00:38:21.014 [2024-06-07 14:40:44.552110] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.014 [2024-06-07 14:40:44.552160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.014 [2024-06-07 14:40:44.552172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.014 [2024-06-07 14:40:44.552177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.014 [2024-06-07 14:40:44.552182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.014 [2024-06-07 14:40:44.552192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.014 qpair failed and we were unable to recover it. 
00:38:21.014 [2024-06-07 14:40:44.562153] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.014 [2024-06-07 14:40:44.562205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.014 [2024-06-07 14:40:44.562220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.014 [2024-06-07 14:40:44.562225] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.014 [2024-06-07 14:40:44.562230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.014 [2024-06-07 14:40:44.562240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.014 qpair failed and we were unable to recover it. 00:38:21.014 [2024-06-07 14:40:44.572141] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.014 [2024-06-07 14:40:44.572190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.014 [2024-06-07 14:40:44.572204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.014 [2024-06-07 14:40:44.572209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.014 [2024-06-07 14:40:44.572214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.014 [2024-06-07 14:40:44.572224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.014 qpair failed and we were unable to recover it. 00:38:21.014 [2024-06-07 14:40:44.582183] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.014 [2024-06-07 14:40:44.582239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.014 [2024-06-07 14:40:44.582251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.014 [2024-06-07 14:40:44.582256] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.014 [2024-06-07 14:40:44.582260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.014 [2024-06-07 14:40:44.582271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.014 qpair failed and we were unable to recover it. 
00:38:21.014 [2024-06-07 14:40:44.592222] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.014 [2024-06-07 14:40:44.592274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.014 [2024-06-07 14:40:44.592286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.014 [2024-06-07 14:40:44.592291] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.014 [2024-06-07 14:40:44.592295] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.014 [2024-06-07 14:40:44.592305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.014 qpair failed and we were unable to recover it. 00:38:21.014 [2024-06-07 14:40:44.602224] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.014 [2024-06-07 14:40:44.602272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.014 [2024-06-07 14:40:44.602284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.014 [2024-06-07 14:40:44.602289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.014 [2024-06-07 14:40:44.602293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.014 [2024-06-07 14:40:44.602309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.014 qpair failed and we were unable to recover it. 00:38:21.014 [2024-06-07 14:40:44.612271] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.014 [2024-06-07 14:40:44.612320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.014 [2024-06-07 14:40:44.612332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.014 [2024-06-07 14:40:44.612337] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.014 [2024-06-07 14:40:44.612342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.014 [2024-06-07 14:40:44.612352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.014 qpair failed and we were unable to recover it. 
00:38:21.014 [2024-06-07 14:40:44.622301] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.014 [2024-06-07 14:40:44.622349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.014 [2024-06-07 14:40:44.622361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.014 [2024-06-07 14:40:44.622366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.014 [2024-06-07 14:40:44.622370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.014 [2024-06-07 14:40:44.622380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.014 qpair failed and we were unable to recover it. 00:38:21.014 [2024-06-07 14:40:44.632346] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.015 [2024-06-07 14:40:44.632397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.015 [2024-06-07 14:40:44.632409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.015 [2024-06-07 14:40:44.632414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.015 [2024-06-07 14:40:44.632419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.015 [2024-06-07 14:40:44.632429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.015 qpair failed and we were unable to recover it. 00:38:21.015 [2024-06-07 14:40:44.642359] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.015 [2024-06-07 14:40:44.642405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.015 [2024-06-07 14:40:44.642417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.015 [2024-06-07 14:40:44.642422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.015 [2024-06-07 14:40:44.642426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.015 [2024-06-07 14:40:44.642436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.015 qpair failed and we were unable to recover it. 
00:38:21.015 [2024-06-07 14:40:44.652438] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.015 [2024-06-07 14:40:44.652510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.015 [2024-06-07 14:40:44.652525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.015 [2024-06-07 14:40:44.652530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.015 [2024-06-07 14:40:44.652535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.015 [2024-06-07 14:40:44.652545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.015 qpair failed and we were unable to recover it. 00:38:21.277 [2024-06-07 14:40:44.662418] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.277 [2024-06-07 14:40:44.662470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.277 [2024-06-07 14:40:44.662482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.277 [2024-06-07 14:40:44.662487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.277 [2024-06-07 14:40:44.662491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.277 [2024-06-07 14:40:44.662502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.277 qpair failed and we were unable to recover it. 00:38:21.277 [2024-06-07 14:40:44.672437] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.277 [2024-06-07 14:40:44.672488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.277 [2024-06-07 14:40:44.672501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.277 [2024-06-07 14:40:44.672505] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.277 [2024-06-07 14:40:44.672510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.277 [2024-06-07 14:40:44.672520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.277 qpair failed and we were unable to recover it. 
00:38:21.277 [2024-06-07 14:40:44.682477] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.277 [2024-06-07 14:40:44.682554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.277 [2024-06-07 14:40:44.682566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.277 [2024-06-07 14:40:44.682573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.277 [2024-06-07 14:40:44.682579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.277 [2024-06-07 14:40:44.682591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.277 qpair failed and we were unable to recover it. 00:38:21.277 [2024-06-07 14:40:44.692503] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.277 [2024-06-07 14:40:44.692556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.277 [2024-06-07 14:40:44.692569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.277 [2024-06-07 14:40:44.692574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.277 [2024-06-07 14:40:44.692581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.277 [2024-06-07 14:40:44.692594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.277 qpair failed and we were unable to recover it. 00:38:21.277 [2024-06-07 14:40:44.702526] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.277 [2024-06-07 14:40:44.702576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.277 [2024-06-07 14:40:44.702589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.277 [2024-06-07 14:40:44.702594] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.277 [2024-06-07 14:40:44.702598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.277 [2024-06-07 14:40:44.702609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.277 qpair failed and we were unable to recover it. 
00:38:21.277 [2024-06-07 14:40:44.712557] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.277 [2024-06-07 14:40:44.712654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.277 [2024-06-07 14:40:44.712667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.277 [2024-06-07 14:40:44.712672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.277 [2024-06-07 14:40:44.712676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.277 [2024-06-07 14:40:44.712687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.277 qpair failed and we were unable to recover it. 00:38:21.277 [2024-06-07 14:40:44.722599] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.277 [2024-06-07 14:40:44.722648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.277 [2024-06-07 14:40:44.722660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.277 [2024-06-07 14:40:44.722665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.277 [2024-06-07 14:40:44.722670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.277 [2024-06-07 14:40:44.722680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.277 qpair failed and we were unable to recover it. 00:38:21.277 [2024-06-07 14:40:44.732594] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.277 [2024-06-07 14:40:44.732640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.277 [2024-06-07 14:40:44.732651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.277 [2024-06-07 14:40:44.732657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.277 [2024-06-07 14:40:44.732661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.277 [2024-06-07 14:40:44.732672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.277 qpair failed and we were unable to recover it. 
00:38:21.277 [2024-06-07 14:40:44.742564] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.277 [2024-06-07 14:40:44.742619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.277 [2024-06-07 14:40:44.742631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.277 [2024-06-07 14:40:44.742636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.277 [2024-06-07 14:40:44.742640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.277 [2024-06-07 14:40:44.742651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.277 qpair failed and we were unable to recover it. 00:38:21.278 [2024-06-07 14:40:44.752705] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.278 [2024-06-07 14:40:44.752755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.278 [2024-06-07 14:40:44.752766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.278 [2024-06-07 14:40:44.752771] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.278 [2024-06-07 14:40:44.752776] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.278 [2024-06-07 14:40:44.752786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.278 qpair failed and we were unable to recover it. 00:38:21.278 [2024-06-07 14:40:44.762575] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.278 [2024-06-07 14:40:44.762624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.278 [2024-06-07 14:40:44.762636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.278 [2024-06-07 14:40:44.762641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.278 [2024-06-07 14:40:44.762646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.278 [2024-06-07 14:40:44.762656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.278 qpair failed and we were unable to recover it. 
00:38:21.278 [2024-06-07 14:40:44.772746] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.278 [2024-06-07 14:40:44.772843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.278 [2024-06-07 14:40:44.772856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.278 [2024-06-07 14:40:44.772861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.278 [2024-06-07 14:40:44.772865] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.278 [2024-06-07 14:40:44.772875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.278 qpair failed and we were unable to recover it. 00:38:21.278 [2024-06-07 14:40:44.782767] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.278 [2024-06-07 14:40:44.782822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.278 [2024-06-07 14:40:44.782835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.278 [2024-06-07 14:40:44.782840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.278 [2024-06-07 14:40:44.782847] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.278 [2024-06-07 14:40:44.782857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.278 qpair failed and we were unable to recover it. 00:38:21.278 [2024-06-07 14:40:44.792782] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.278 [2024-06-07 14:40:44.792833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.278 [2024-06-07 14:40:44.792845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.278 [2024-06-07 14:40:44.792850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.278 [2024-06-07 14:40:44.792855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.278 [2024-06-07 14:40:44.792865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.278 qpair failed and we were unable to recover it. 
00:38:21.278 [2024-06-07 14:40:44.802686] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.278 [2024-06-07 14:40:44.802731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.278 [2024-06-07 14:40:44.802743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.278 [2024-06-07 14:40:44.802749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.278 [2024-06-07 14:40:44.802753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.278 [2024-06-07 14:40:44.802764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.278 qpair failed and we were unable to recover it. 00:38:21.278 [2024-06-07 14:40:44.812826] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.278 [2024-06-07 14:40:44.812870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.278 [2024-06-07 14:40:44.812882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.278 [2024-06-07 14:40:44.812887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.278 [2024-06-07 14:40:44.812892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.278 [2024-06-07 14:40:44.812902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.278 qpair failed and we were unable to recover it. 00:38:21.278 [2024-06-07 14:40:44.822884] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.278 [2024-06-07 14:40:44.822942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.278 [2024-06-07 14:40:44.822954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.278 [2024-06-07 14:40:44.822958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.278 [2024-06-07 14:40:44.822963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.278 [2024-06-07 14:40:44.822973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.278 qpair failed and we were unable to recover it. 
00:38:21.278 [2024-06-07 14:40:44.832908] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.278 [2024-06-07 14:40:44.832961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.278 [2024-06-07 14:40:44.832973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.278 [2024-06-07 14:40:44.832978] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.278 [2024-06-07 14:40:44.832982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.278 [2024-06-07 14:40:44.832993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.278 qpair failed and we were unable to recover it. 00:38:21.278 [2024-06-07 14:40:44.842946] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.278 [2024-06-07 14:40:44.843034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.278 [2024-06-07 14:40:44.843046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.278 [2024-06-07 14:40:44.843051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.278 [2024-06-07 14:40:44.843055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.278 [2024-06-07 14:40:44.843066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.278 qpair failed and we were unable to recover it. 00:38:21.278 [2024-06-07 14:40:44.852963] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.278 [2024-06-07 14:40:44.853009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.278 [2024-06-07 14:40:44.853021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.278 [2024-06-07 14:40:44.853026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.278 [2024-06-07 14:40:44.853030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.278 [2024-06-07 14:40:44.853041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.278 qpair failed and we were unable to recover it. 
00:38:21.278 [2024-06-07 14:40:44.863013] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.278 [2024-06-07 14:40:44.863062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.278 [2024-06-07 14:40:44.863074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.278 [2024-06-07 14:40:44.863079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.278 [2024-06-07 14:40:44.863084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.278 [2024-06-07 14:40:44.863094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.278 qpair failed and we were unable to recover it. 00:38:21.278 [2024-06-07 14:40:44.873003] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.278 [2024-06-07 14:40:44.873060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.278 [2024-06-07 14:40:44.873072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.278 [2024-06-07 14:40:44.873080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.278 [2024-06-07 14:40:44.873085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.278 [2024-06-07 14:40:44.873095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.278 qpair failed and we were unable to recover it. 00:38:21.278 [2024-06-07 14:40:44.883027] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.278 [2024-06-07 14:40:44.883076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.278 [2024-06-07 14:40:44.883087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.279 [2024-06-07 14:40:44.883092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.279 [2024-06-07 14:40:44.883097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.279 [2024-06-07 14:40:44.883107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.279 qpair failed and we were unable to recover it. 
00:38:21.279 [2024-06-07 14:40:44.893057] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.279 [2024-06-07 14:40:44.893105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.279 [2024-06-07 14:40:44.893117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.279 [2024-06-07 14:40:44.893122] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.279 [2024-06-07 14:40:44.893127] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.279 [2024-06-07 14:40:44.893137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.279 qpair failed and we were unable to recover it. 00:38:21.279 [2024-06-07 14:40:44.903090] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.279 [2024-06-07 14:40:44.903137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.279 [2024-06-07 14:40:44.903149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.279 [2024-06-07 14:40:44.903154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.279 [2024-06-07 14:40:44.903158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.279 [2024-06-07 14:40:44.903168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.279 qpair failed and we were unable to recover it. 00:38:21.279 [2024-06-07 14:40:44.913134] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.279 [2024-06-07 14:40:44.913188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.279 [2024-06-07 14:40:44.913203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.279 [2024-06-07 14:40:44.913208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.279 [2024-06-07 14:40:44.913212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.279 [2024-06-07 14:40:44.913223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.279 qpair failed and we were unable to recover it. 
00:38:21.540 [2024-06-07 14:40:44.923144] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.540 [2024-06-07 14:40:44.923193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.540 [2024-06-07 14:40:44.923208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.540 [2024-06-07 14:40:44.923213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.540 [2024-06-07 14:40:44.923217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.540 [2024-06-07 14:40:44.923227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.540 qpair failed and we were unable to recover it. 00:38:21.540 [2024-06-07 14:40:44.933155] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.540 [2024-06-07 14:40:44.933230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.540 [2024-06-07 14:40:44.933242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.540 [2024-06-07 14:40:44.933247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.540 [2024-06-07 14:40:44.933252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.540 [2024-06-07 14:40:44.933262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.540 qpair failed and we were unable to recover it. 00:38:21.540 [2024-06-07 14:40:44.943179] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.540 [2024-06-07 14:40:44.943237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.540 [2024-06-07 14:40:44.943249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.540 [2024-06-07 14:40:44.943254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.540 [2024-06-07 14:40:44.943258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.540 [2024-06-07 14:40:44.943269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.540 qpair failed and we were unable to recover it. 
00:38:21.540 [2024-06-07 14:40:44.953221] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.540 [2024-06-07 14:40:44.953272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.540 [2024-06-07 14:40:44.953283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.540 [2024-06-07 14:40:44.953288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.540 [2024-06-07 14:40:44.953293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.540 [2024-06-07 14:40:44.953303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.540 qpair failed and we were unable to recover it. 00:38:21.540 [2024-06-07 14:40:44.963252] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.540 [2024-06-07 14:40:44.963298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.540 [2024-06-07 14:40:44.963313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.540 [2024-06-07 14:40:44.963318] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.540 [2024-06-07 14:40:44.963322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.540 [2024-06-07 14:40:44.963332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.540 qpair failed and we were unable to recover it. 00:38:21.540 [2024-06-07 14:40:44.973224] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.540 [2024-06-07 14:40:44.973270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.540 [2024-06-07 14:40:44.973283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.540 [2024-06-07 14:40:44.973288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.540 [2024-06-07 14:40:44.973293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.540 [2024-06-07 14:40:44.973303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.540 qpair failed and we were unable to recover it. 
00:38:21.540 [2024-06-07 14:40:44.983185] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.540 [2024-06-07 14:40:44.983242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.540 [2024-06-07 14:40:44.983255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.540 [2024-06-07 14:40:44.983260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.540 [2024-06-07 14:40:44.983264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.540 [2024-06-07 14:40:44.983275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.540 qpair failed and we were unable to recover it. 00:38:21.540 [2024-06-07 14:40:44.993310] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.540 [2024-06-07 14:40:44.993364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.540 [2024-06-07 14:40:44.993376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.540 [2024-06-07 14:40:44.993381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.540 [2024-06-07 14:40:44.993386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.540 [2024-06-07 14:40:44.993397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.540 qpair failed and we were unable to recover it. 00:38:21.540 [2024-06-07 14:40:45.003374] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.540 [2024-06-07 14:40:45.003417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.540 [2024-06-07 14:40:45.003429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.540 [2024-06-07 14:40:45.003434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.540 [2024-06-07 14:40:45.003438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.540 [2024-06-07 14:40:45.003451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.540 qpair failed and we were unable to recover it. 
00:38:21.540 [2024-06-07 14:40:45.013416] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.540 [2024-06-07 14:40:45.013466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.540 [2024-06-07 14:40:45.013478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.540 [2024-06-07 14:40:45.013484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.540 [2024-06-07 14:40:45.013488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.540 [2024-06-07 14:40:45.013499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.540 qpair failed and we were unable to recover it. 00:38:21.540 [2024-06-07 14:40:45.023298] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.540 [2024-06-07 14:40:45.023356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.541 [2024-06-07 14:40:45.023368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.541 [2024-06-07 14:40:45.023373] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.541 [2024-06-07 14:40:45.023377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.541 [2024-06-07 14:40:45.023387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.541 qpair failed and we were unable to recover it. 00:38:21.541 [2024-06-07 14:40:45.033474] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.541 [2024-06-07 14:40:45.033528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.541 [2024-06-07 14:40:45.033540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.541 [2024-06-07 14:40:45.033544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.541 [2024-06-07 14:40:45.033549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.541 [2024-06-07 14:40:45.033559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.541 qpair failed and we were unable to recover it. 
00:38:21.541 [2024-06-07 14:40:45.043354] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.541 [2024-06-07 14:40:45.043413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.541 [2024-06-07 14:40:45.043425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.541 [2024-06-07 14:40:45.043430] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.541 [2024-06-07 14:40:45.043434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.541 [2024-06-07 14:40:45.043444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.541 qpair failed and we were unable to recover it. 00:38:21.541 [2024-06-07 14:40:45.053503] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.541 [2024-06-07 14:40:45.053551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.541 [2024-06-07 14:40:45.053566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.541 [2024-06-07 14:40:45.053570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.541 [2024-06-07 14:40:45.053575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.541 [2024-06-07 14:40:45.053585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.541 qpair failed and we were unable to recover it. 00:38:21.541 [2024-06-07 14:40:45.063413] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.541 [2024-06-07 14:40:45.063467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.541 [2024-06-07 14:40:45.063480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.541 [2024-06-07 14:40:45.063485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.541 [2024-06-07 14:40:45.063489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.541 [2024-06-07 14:40:45.063500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.541 qpair failed and we were unable to recover it. 
00:38:21.541 [2024-06-07 14:40:45.073559] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.541 [2024-06-07 14:40:45.073662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.541 [2024-06-07 14:40:45.073674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.541 [2024-06-07 14:40:45.073679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.541 [2024-06-07 14:40:45.073683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.541 [2024-06-07 14:40:45.073694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.541 qpair failed and we were unable to recover it. 00:38:21.541 [2024-06-07 14:40:45.083583] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.541 [2024-06-07 14:40:45.083660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.541 [2024-06-07 14:40:45.083672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.541 [2024-06-07 14:40:45.083677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.541 [2024-06-07 14:40:45.083681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.541 [2024-06-07 14:40:45.083692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.541 qpair failed and we were unable to recover it. 00:38:21.541 [2024-06-07 14:40:45.093591] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.541 [2024-06-07 14:40:45.093636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.541 [2024-06-07 14:40:45.093648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.541 [2024-06-07 14:40:45.093653] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.541 [2024-06-07 14:40:45.093660] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.541 [2024-06-07 14:40:45.093670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.541 qpair failed and we were unable to recover it. 
00:38:21.541 [2024-06-07 14:40:45.103614] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.541 [2024-06-07 14:40:45.103663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.541 [2024-06-07 14:40:45.103676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.541 [2024-06-07 14:40:45.103681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.541 [2024-06-07 14:40:45.103686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.541 [2024-06-07 14:40:45.103696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.541 qpair failed and we were unable to recover it. 00:38:21.541 [2024-06-07 14:40:45.113658] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.541 [2024-06-07 14:40:45.113707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.541 [2024-06-07 14:40:45.113720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.541 [2024-06-07 14:40:45.113725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.541 [2024-06-07 14:40:45.113729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.541 [2024-06-07 14:40:45.113740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.541 qpair failed and we were unable to recover it. 00:38:21.541 [2024-06-07 14:40:45.123695] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.541 [2024-06-07 14:40:45.123742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.541 [2024-06-07 14:40:45.123754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.541 [2024-06-07 14:40:45.123759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.541 [2024-06-07 14:40:45.123764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.541 [2024-06-07 14:40:45.123774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.541 qpair failed and we were unable to recover it. 
00:38:21.541 [2024-06-07 14:40:45.133721] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.541 [2024-06-07 14:40:45.133768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.541 [2024-06-07 14:40:45.133780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.541 [2024-06-07 14:40:45.133785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.541 [2024-06-07 14:40:45.133789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.541 [2024-06-07 14:40:45.133799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.541 qpair failed and we were unable to recover it. 00:38:21.541 [2024-06-07 14:40:45.143753] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.541 [2024-06-07 14:40:45.143806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.541 [2024-06-07 14:40:45.143818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.541 [2024-06-07 14:40:45.143823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.541 [2024-06-07 14:40:45.143827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.541 [2024-06-07 14:40:45.143837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.541 qpair failed and we were unable to recover it. 00:38:21.541 [2024-06-07 14:40:45.153771] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.541 [2024-06-07 14:40:45.153820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.541 [2024-06-07 14:40:45.153832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.541 [2024-06-07 14:40:45.153837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.542 [2024-06-07 14:40:45.153841] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.542 [2024-06-07 14:40:45.153851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.542 qpair failed and we were unable to recover it. 
00:38:21.542 [2024-06-07 14:40:45.163788] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.542 [2024-06-07 14:40:45.163833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.542 [2024-06-07 14:40:45.163845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.542 [2024-06-07 14:40:45.163850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.542 [2024-06-07 14:40:45.163854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.542 [2024-06-07 14:40:45.163865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.542 qpair failed and we were unable to recover it. 00:38:21.542 [2024-06-07 14:40:45.173783] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.542 [2024-06-07 14:40:45.173829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.542 [2024-06-07 14:40:45.173841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.542 [2024-06-07 14:40:45.173846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.542 [2024-06-07 14:40:45.173850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.542 [2024-06-07 14:40:45.173861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.542 qpair failed and we were unable to recover it. 00:38:21.542 [2024-06-07 14:40:45.183722] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.542 [2024-06-07 14:40:45.183772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.542 [2024-06-07 14:40:45.183784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.542 [2024-06-07 14:40:45.183789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.542 [2024-06-07 14:40:45.183797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.542 [2024-06-07 14:40:45.183807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.542 qpair failed and we were unable to recover it. 
00:38:21.802 [2024-06-07 14:40:45.193863] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.802 [2024-06-07 14:40:45.193914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.802 [2024-06-07 14:40:45.193926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.803 [2024-06-07 14:40:45.193931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.803 [2024-06-07 14:40:45.193936] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.803 [2024-06-07 14:40:45.193946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.803 qpair failed and we were unable to recover it. 00:38:21.803 [2024-06-07 14:40:45.203902] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.803 [2024-06-07 14:40:45.203952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.803 [2024-06-07 14:40:45.203964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.803 [2024-06-07 14:40:45.203969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.803 [2024-06-07 14:40:45.203974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.803 [2024-06-07 14:40:45.203984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.803 qpair failed and we were unable to recover it. 00:38:21.803 [2024-06-07 14:40:45.213931] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.803 [2024-06-07 14:40:45.213981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.803 [2024-06-07 14:40:45.213999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.803 [2024-06-07 14:40:45.214005] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.803 [2024-06-07 14:40:45.214010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.803 [2024-06-07 14:40:45.214023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.803 qpair failed and we were unable to recover it. 
00:38:21.803 [2024-06-07 14:40:45.223954] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.803 [2024-06-07 14:40:45.224006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.803 [2024-06-07 14:40:45.224025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.803 [2024-06-07 14:40:45.224031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.803 [2024-06-07 14:40:45.224036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.803 [2024-06-07 14:40:45.224050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.803 qpair failed and we were unable to recover it. 00:38:21.803 [2024-06-07 14:40:45.233972] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.803 [2024-06-07 14:40:45.234026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.803 [2024-06-07 14:40:45.234044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.803 [2024-06-07 14:40:45.234050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.803 [2024-06-07 14:40:45.234055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.803 [2024-06-07 14:40:45.234069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.803 qpair failed and we were unable to recover it. 00:38:21.803 [2024-06-07 14:40:45.244025] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.803 [2024-06-07 14:40:45.244070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.803 [2024-06-07 14:40:45.244083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.803 [2024-06-07 14:40:45.244089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.803 [2024-06-07 14:40:45.244093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.803 [2024-06-07 14:40:45.244104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.803 qpair failed and we were unable to recover it. 
00:38:21.803 [2024-06-07 14:40:45.254001] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.803 [2024-06-07 14:40:45.254047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.803 [2024-06-07 14:40:45.254059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.803 [2024-06-07 14:40:45.254064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.803 [2024-06-07 14:40:45.254068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.803 [2024-06-07 14:40:45.254078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.803 qpair failed and we were unable to recover it. 00:38:21.803 [2024-06-07 14:40:45.264065] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.803 [2024-06-07 14:40:45.264115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.803 [2024-06-07 14:40:45.264127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.803 [2024-06-07 14:40:45.264132] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.803 [2024-06-07 14:40:45.264136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.803 [2024-06-07 14:40:45.264147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.803 qpair failed and we were unable to recover it. 00:38:21.803 [2024-06-07 14:40:45.274111] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.803 [2024-06-07 14:40:45.274161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.803 [2024-06-07 14:40:45.274173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.803 [2024-06-07 14:40:45.274181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.803 [2024-06-07 14:40:45.274186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.803 [2024-06-07 14:40:45.274199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.803 qpair failed and we were unable to recover it. 
00:38:21.803 [2024-06-07 14:40:45.284121] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.803 [2024-06-07 14:40:45.284165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.803 [2024-06-07 14:40:45.284176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.803 [2024-06-07 14:40:45.284181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.803 [2024-06-07 14:40:45.284186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.803 [2024-06-07 14:40:45.284199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.803 qpair failed and we were unable to recover it. 00:38:21.803 [2024-06-07 14:40:45.294142] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.803 [2024-06-07 14:40:45.294188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.803 [2024-06-07 14:40:45.294202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.803 [2024-06-07 14:40:45.294207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.803 [2024-06-07 14:40:45.294212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.803 [2024-06-07 14:40:45.294222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.803 qpair failed and we were unable to recover it. 00:38:21.803 [2024-06-07 14:40:45.304148] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.803 [2024-06-07 14:40:45.304204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.803 [2024-06-07 14:40:45.304216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.803 [2024-06-07 14:40:45.304220] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.803 [2024-06-07 14:40:45.304225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.803 [2024-06-07 14:40:45.304236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.803 qpair failed and we were unable to recover it. 
00:38:21.803 [2024-06-07 14:40:45.314078] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.803 [2024-06-07 14:40:45.314138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.803 [2024-06-07 14:40:45.314150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.803 [2024-06-07 14:40:45.314155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.803 [2024-06-07 14:40:45.314159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.803 [2024-06-07 14:40:45.314170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.803 qpair failed and we were unable to recover it. 00:38:21.803 [2024-06-07 14:40:45.324235] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.803 [2024-06-07 14:40:45.324279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.803 [2024-06-07 14:40:45.324292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.803 [2024-06-07 14:40:45.324297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.803 [2024-06-07 14:40:45.324301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.804 [2024-06-07 14:40:45.324311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.804 qpair failed and we were unable to recover it. 00:38:21.804 [2024-06-07 14:40:45.334250] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.804 [2024-06-07 14:40:45.334295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.804 [2024-06-07 14:40:45.334308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.804 [2024-06-07 14:40:45.334313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.804 [2024-06-07 14:40:45.334317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.804 [2024-06-07 14:40:45.334328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.804 qpair failed and we were unable to recover it. 
00:38:21.804 [2024-06-07 14:40:45.344277] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.804 [2024-06-07 14:40:45.344326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.804 [2024-06-07 14:40:45.344339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.804 [2024-06-07 14:40:45.344344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.804 [2024-06-07 14:40:45.344348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.804 [2024-06-07 14:40:45.344359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.804 qpair failed and we were unable to recover it. 00:38:21.804 [2024-06-07 14:40:45.354292] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.804 [2024-06-07 14:40:45.354345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.804 [2024-06-07 14:40:45.354357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.804 [2024-06-07 14:40:45.354362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.804 [2024-06-07 14:40:45.354367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.804 [2024-06-07 14:40:45.354377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.804 qpair failed and we were unable to recover it. 00:38:21.804 [2024-06-07 14:40:45.364339] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.804 [2024-06-07 14:40:45.364388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.804 [2024-06-07 14:40:45.364406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.804 [2024-06-07 14:40:45.364411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.804 [2024-06-07 14:40:45.364416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.804 [2024-06-07 14:40:45.364426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.804 qpair failed and we were unable to recover it. 
00:38:21.804 [2024-06-07 14:40:45.374373] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.804 [2024-06-07 14:40:45.374422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.804 [2024-06-07 14:40:45.374434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.804 [2024-06-07 14:40:45.374439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.804 [2024-06-07 14:40:45.374444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.804 [2024-06-07 14:40:45.374454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.804 qpair failed and we were unable to recover it. 00:38:21.804 [2024-06-07 14:40:45.384360] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.804 [2024-06-07 14:40:45.384409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.804 [2024-06-07 14:40:45.384421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.804 [2024-06-07 14:40:45.384426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.804 [2024-06-07 14:40:45.384430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.804 [2024-06-07 14:40:45.384440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.804 qpair failed and we were unable to recover it. 00:38:21.804 [2024-06-07 14:40:45.394409] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.804 [2024-06-07 14:40:45.394464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.804 [2024-06-07 14:40:45.394476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.804 [2024-06-07 14:40:45.394481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.804 [2024-06-07 14:40:45.394486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.804 [2024-06-07 14:40:45.394496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.804 qpair failed and we were unable to recover it. 
00:38:21.804 [2024-06-07 14:40:45.404451] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.804 [2024-06-07 14:40:45.404496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.804 [2024-06-07 14:40:45.404508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.804 [2024-06-07 14:40:45.404514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.804 [2024-06-07 14:40:45.404519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.804 [2024-06-07 14:40:45.404533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.804 qpair failed and we were unable to recover it. 00:38:21.804 [2024-06-07 14:40:45.414479] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.804 [2024-06-07 14:40:45.414523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.804 [2024-06-07 14:40:45.414535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.804 [2024-06-07 14:40:45.414541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.804 [2024-06-07 14:40:45.414547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.804 [2024-06-07 14:40:45.414558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.804 qpair failed and we were unable to recover it. 00:38:21.804 [2024-06-07 14:40:45.424496] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.804 [2024-06-07 14:40:45.424564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.804 [2024-06-07 14:40:45.424576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.804 [2024-06-07 14:40:45.424581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.804 [2024-06-07 14:40:45.424585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.804 [2024-06-07 14:40:45.424596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.804 qpair failed and we were unable to recover it. 
00:38:21.804 [2024-06-07 14:40:45.434537] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.804 [2024-06-07 14:40:45.434595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.804 [2024-06-07 14:40:45.434607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.804 [2024-06-07 14:40:45.434612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.804 [2024-06-07 14:40:45.434616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.804 [2024-06-07 14:40:45.434626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.804 qpair failed and we were unable to recover it. 00:38:21.804 [2024-06-07 14:40:45.444493] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:21.804 [2024-06-07 14:40:45.444553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:21.804 [2024-06-07 14:40:45.444570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:21.804 [2024-06-07 14:40:45.444578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.804 [2024-06-07 14:40:45.444586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:21.804 [2024-06-07 14:40:45.444603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:21.804 qpair failed and we were unable to recover it. 00:38:22.065 [2024-06-07 14:40:45.454602] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.065 [2024-06-07 14:40:45.454648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.065 [2024-06-07 14:40:45.454666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.065 [2024-06-07 14:40:45.454671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.065 [2024-06-07 14:40:45.454676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.065 [2024-06-07 14:40:45.454687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.066 qpair failed and we were unable to recover it. 
00:38:22.066 [2024-06-07 14:40:45.464630] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.066 [2024-06-07 14:40:45.464683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.066 [2024-06-07 14:40:45.464695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.066 [2024-06-07 14:40:45.464700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.066 [2024-06-07 14:40:45.464705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.066 [2024-06-07 14:40:45.464715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.066 qpair failed and we were unable to recover it. 00:38:22.066 [2024-06-07 14:40:45.474655] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.066 [2024-06-07 14:40:45.474745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.066 [2024-06-07 14:40:45.474758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.066 [2024-06-07 14:40:45.474763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.066 [2024-06-07 14:40:45.474768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.066 [2024-06-07 14:40:45.474779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.066 qpair failed and we were unable to recover it. 00:38:22.066 [2024-06-07 14:40:45.484683] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.066 [2024-06-07 14:40:45.484729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.066 [2024-06-07 14:40:45.484741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.066 [2024-06-07 14:40:45.484746] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.066 [2024-06-07 14:40:45.484751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.066 [2024-06-07 14:40:45.484761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.066 qpair failed and we were unable to recover it. 
00:38:22.066 [2024-06-07 14:40:45.494706] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.066 [2024-06-07 14:40:45.494751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.066 [2024-06-07 14:40:45.494763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.066 [2024-06-07 14:40:45.494768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.066 [2024-06-07 14:40:45.494773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.066 [2024-06-07 14:40:45.494786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.066 qpair failed and we were unable to recover it. 00:38:22.066 [2024-06-07 14:40:45.504717] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.066 [2024-06-07 14:40:45.504809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.066 [2024-06-07 14:40:45.504821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.066 [2024-06-07 14:40:45.504827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.066 [2024-06-07 14:40:45.504832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.066 [2024-06-07 14:40:45.504842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.066 qpair failed and we were unable to recover it. 00:38:22.066 [2024-06-07 14:40:45.514715] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.066 [2024-06-07 14:40:45.514769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.066 [2024-06-07 14:40:45.514782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.066 [2024-06-07 14:40:45.514787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.066 [2024-06-07 14:40:45.514792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.066 [2024-06-07 14:40:45.514803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.066 qpair failed and we were unable to recover it. 
00:38:22.066 [2024-06-07 14:40:45.524741] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.066 [2024-06-07 14:40:45.524787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.066 [2024-06-07 14:40:45.524799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.066 [2024-06-07 14:40:45.524804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.066 [2024-06-07 14:40:45.524808] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.066 [2024-06-07 14:40:45.524818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.066 qpair failed and we were unable to recover it. 00:38:22.066 [2024-06-07 14:40:45.534802] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.066 [2024-06-07 14:40:45.534849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.066 [2024-06-07 14:40:45.534861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.066 [2024-06-07 14:40:45.534866] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.066 [2024-06-07 14:40:45.534871] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.066 [2024-06-07 14:40:45.534882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.066 qpair failed and we were unable to recover it. 00:38:22.066 [2024-06-07 14:40:45.544834] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.066 [2024-06-07 14:40:45.544888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.066 [2024-06-07 14:40:45.544901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.066 [2024-06-07 14:40:45.544906] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.066 [2024-06-07 14:40:45.544910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.066 [2024-06-07 14:40:45.544921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.066 qpair failed and we were unable to recover it. 
00:38:22.066 [2024-06-07 14:40:45.554789] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.066 [2024-06-07 14:40:45.554846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.066 [2024-06-07 14:40:45.554858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.066 [2024-06-07 14:40:45.554863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.066 [2024-06-07 14:40:45.554868] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.066 [2024-06-07 14:40:45.554879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.066 qpair failed and we were unable to recover it. 00:38:22.066 [2024-06-07 14:40:45.564891] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.066 [2024-06-07 14:40:45.564938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.066 [2024-06-07 14:40:45.564950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.066 [2024-06-07 14:40:45.564955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.066 [2024-06-07 14:40:45.564959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.066 [2024-06-07 14:40:45.564970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.066 qpair failed and we were unable to recover it. 00:38:22.066 [2024-06-07 14:40:45.574887] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.066 [2024-06-07 14:40:45.574931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.066 [2024-06-07 14:40:45.574944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.066 [2024-06-07 14:40:45.574949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.066 [2024-06-07 14:40:45.574953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.066 [2024-06-07 14:40:45.574964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.066 qpair failed and we were unable to recover it. 
00:38:22.066 [2024-06-07 14:40:45.584950] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.066 [2024-06-07 14:40:45.585020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.066 [2024-06-07 14:40:45.585032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.066 [2024-06-07 14:40:45.585037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.066 [2024-06-07 14:40:45.585045] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.067 [2024-06-07 14:40:45.585056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.067 qpair failed and we were unable to recover it. 00:38:22.067 [2024-06-07 14:40:45.594978] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.067 [2024-06-07 14:40:45.595039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.067 [2024-06-07 14:40:45.595051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.067 [2024-06-07 14:40:45.595057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.067 [2024-06-07 14:40:45.595061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.067 [2024-06-07 14:40:45.595072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.067 qpair failed and we were unable to recover it. 00:38:22.067 [2024-06-07 14:40:45.604876] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.067 [2024-06-07 14:40:45.604925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.067 [2024-06-07 14:40:45.604937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.067 [2024-06-07 14:40:45.604942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.067 [2024-06-07 14:40:45.604947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.067 [2024-06-07 14:40:45.604957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.067 qpair failed and we were unable to recover it. 
00:38:22.067 [2024-06-07 14:40:45.615045] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.067 [2024-06-07 14:40:45.615133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.067 [2024-06-07 14:40:45.615147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.067 [2024-06-07 14:40:45.615153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.067 [2024-06-07 14:40:45.615157] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.067 [2024-06-07 14:40:45.615168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.067 qpair failed and we were unable to recover it. 00:38:22.067 [2024-06-07 14:40:45.625055] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.067 [2024-06-07 14:40:45.625104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.067 [2024-06-07 14:40:45.625116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.067 [2024-06-07 14:40:45.625121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.067 [2024-06-07 14:40:45.625125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.067 [2024-06-07 14:40:45.625136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.067 qpair failed and we were unable to recover it. 00:38:22.067 [2024-06-07 14:40:45.635065] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.067 [2024-06-07 14:40:45.635122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.067 [2024-06-07 14:40:45.635135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.067 [2024-06-07 14:40:45.635140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.067 [2024-06-07 14:40:45.635144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.067 [2024-06-07 14:40:45.635154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.067 qpair failed and we were unable to recover it. 
00:38:22.067 [2024-06-07 14:40:45.645105] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.067 [2024-06-07 14:40:45.645157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.067 [2024-06-07 14:40:45.645170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.067 [2024-06-07 14:40:45.645174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.067 [2024-06-07 14:40:45.645179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.067 [2024-06-07 14:40:45.645189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.067 qpair failed and we were unable to recover it. 00:38:22.067 [2024-06-07 14:40:45.655154] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.067 [2024-06-07 14:40:45.655200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.067 [2024-06-07 14:40:45.655213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.067 [2024-06-07 14:40:45.655218] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.067 [2024-06-07 14:40:45.655222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.067 [2024-06-07 14:40:45.655233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.067 qpair failed and we were unable to recover it. 00:38:22.067 [2024-06-07 14:40:45.665050] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.067 [2024-06-07 14:40:45.665097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.067 [2024-06-07 14:40:45.665110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.067 [2024-06-07 14:40:45.665115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.067 [2024-06-07 14:40:45.665120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.067 [2024-06-07 14:40:45.665130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.067 qpair failed and we were unable to recover it. 
00:38:22.067 [2024-06-07 14:40:45.675177] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.067 [2024-06-07 14:40:45.675268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.067 [2024-06-07 14:40:45.675281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.067 [2024-06-07 14:40:45.675289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.067 [2024-06-07 14:40:45.675294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.067 [2024-06-07 14:40:45.675304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.067 qpair failed and we were unable to recover it. 00:38:22.067 [2024-06-07 14:40:45.685252] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.067 [2024-06-07 14:40:45.685327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.067 [2024-06-07 14:40:45.685339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.067 [2024-06-07 14:40:45.685344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.067 [2024-06-07 14:40:45.685348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.067 [2024-06-07 14:40:45.685359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.067 qpair failed and we were unable to recover it. 00:38:22.067 [2024-06-07 14:40:45.695207] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.067 [2024-06-07 14:40:45.695256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.067 [2024-06-07 14:40:45.695269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.067 [2024-06-07 14:40:45.695273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.067 [2024-06-07 14:40:45.695278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.067 [2024-06-07 14:40:45.695288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.067 qpair failed and we were unable to recover it. 
00:38:22.067 [2024-06-07 14:40:45.705290] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.067 [2024-06-07 14:40:45.705374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.067 [2024-06-07 14:40:45.705386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.067 [2024-06-07 14:40:45.705391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.067 [2024-06-07 14:40:45.705396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.067 [2024-06-07 14:40:45.705406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.067 qpair failed and we were unable to recover it. 00:38:22.328 [2024-06-07 14:40:45.715258] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.328 [2024-06-07 14:40:45.715304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.328 [2024-06-07 14:40:45.715317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.328 [2024-06-07 14:40:45.715322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.328 [2024-06-07 14:40:45.715327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.328 [2024-06-07 14:40:45.715338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.328 qpair failed and we were unable to recover it. 00:38:22.328 [2024-06-07 14:40:45.725344] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.328 [2024-06-07 14:40:45.725418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.328 [2024-06-07 14:40:45.725430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.328 [2024-06-07 14:40:45.725435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.328 [2024-06-07 14:40:45.725440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.328 [2024-06-07 14:40:45.725450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.328 qpair failed and we were unable to recover it. 
00:38:22.328 [2024-06-07 14:40:45.735381] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.328 [2024-06-07 14:40:45.735436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.328 [2024-06-07 14:40:45.735449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.328 [2024-06-07 14:40:45.735454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.328 [2024-06-07 14:40:45.735459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.328 [2024-06-07 14:40:45.735470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.328 qpair failed and we were unable to recover it. 00:38:22.328 [2024-06-07 14:40:45.745425] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.329 [2024-06-07 14:40:45.745476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.329 [2024-06-07 14:40:45.745488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.329 [2024-06-07 14:40:45.745494] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.329 [2024-06-07 14:40:45.745498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.329 [2024-06-07 14:40:45.745509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.329 qpair failed and we were unable to recover it. 00:38:22.329 [2024-06-07 14:40:45.755392] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.329 [2024-06-07 14:40:45.755478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.329 [2024-06-07 14:40:45.755491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.329 [2024-06-07 14:40:45.755495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.329 [2024-06-07 14:40:45.755500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.329 [2024-06-07 14:40:45.755511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.329 qpair failed and we were unable to recover it. 
00:38:22.329 [2024-06-07 14:40:45.765458] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.329 [2024-06-07 14:40:45.765500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.329 [2024-06-07 14:40:45.765512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.329 [2024-06-07 14:40:45.765520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.329 [2024-06-07 14:40:45.765525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.329 [2024-06-07 14:40:45.765536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.329 qpair failed and we were unable to recover it. 00:38:22.329 [2024-06-07 14:40:45.775486] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.329 [2024-06-07 14:40:45.775530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.329 [2024-06-07 14:40:45.775542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.329 [2024-06-07 14:40:45.775547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.329 [2024-06-07 14:40:45.775552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.329 [2024-06-07 14:40:45.775562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.329 qpair failed and we were unable to recover it. 00:38:22.329 [2024-06-07 14:40:45.785513] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.329 [2024-06-07 14:40:45.785562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.329 [2024-06-07 14:40:45.785575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.329 [2024-06-07 14:40:45.785580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.329 [2024-06-07 14:40:45.785584] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.329 [2024-06-07 14:40:45.785594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.329 qpair failed and we were unable to recover it. 
00:38:22.329 [2024-06-07 14:40:45.795484] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.329 [2024-06-07 14:40:45.795532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.329 [2024-06-07 14:40:45.795544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.329 [2024-06-07 14:40:45.795549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.329 [2024-06-07 14:40:45.795553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.329 [2024-06-07 14:40:45.795564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.329 qpair failed and we were unable to recover it. 00:38:22.329 [2024-06-07 14:40:45.805570] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.329 [2024-06-07 14:40:45.805619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.329 [2024-06-07 14:40:45.805631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.329 [2024-06-07 14:40:45.805636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.329 [2024-06-07 14:40:45.805640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.329 [2024-06-07 14:40:45.805650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.329 qpair failed and we were unable to recover it. 00:38:22.329 [2024-06-07 14:40:45.815588] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.329 [2024-06-07 14:40:45.815634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.329 [2024-06-07 14:40:45.815646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.329 [2024-06-07 14:40:45.815651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.329 [2024-06-07 14:40:45.815656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.329 [2024-06-07 14:40:45.815666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.329 qpair failed and we were unable to recover it. 
00:38:22.329 [2024-06-07 14:40:45.825520] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.329 [2024-06-07 14:40:45.825570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.329 [2024-06-07 14:40:45.825582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.329 [2024-06-07 14:40:45.825587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.329 [2024-06-07 14:40:45.825591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.329 [2024-06-07 14:40:45.825602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.329 qpair failed and we were unable to recover it. 00:38:22.329 [2024-06-07 14:40:45.835575] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.329 [2024-06-07 14:40:45.835622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.329 [2024-06-07 14:40:45.835634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.329 [2024-06-07 14:40:45.835639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.329 [2024-06-07 14:40:45.835644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.329 [2024-06-07 14:40:45.835654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.329 qpair failed and we were unable to recover it. 00:38:22.329 [2024-06-07 14:40:45.845659] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.329 [2024-06-07 14:40:45.845706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.329 [2024-06-07 14:40:45.845718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.329 [2024-06-07 14:40:45.845723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.329 [2024-06-07 14:40:45.845728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.329 [2024-06-07 14:40:45.845738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.329 qpair failed and we were unable to recover it. 
00:38:22.329 [2024-06-07 14:40:45.855686] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.329 [2024-06-07 14:40:45.855763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.329 [2024-06-07 14:40:45.855777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.329 [2024-06-07 14:40:45.855782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.329 [2024-06-07 14:40:45.855787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.329 [2024-06-07 14:40:45.855798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.329 qpair failed and we were unable to recover it. 00:38:22.329 [2024-06-07 14:40:45.865601] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.329 [2024-06-07 14:40:45.865656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.329 [2024-06-07 14:40:45.865668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.329 [2024-06-07 14:40:45.865674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.329 [2024-06-07 14:40:45.865678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.329 [2024-06-07 14:40:45.865689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.329 qpair failed and we were unable to recover it. 00:38:22.329 [2024-06-07 14:40:45.875738] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.329 [2024-06-07 14:40:45.875783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.330 [2024-06-07 14:40:45.875795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.330 [2024-06-07 14:40:45.875800] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.330 [2024-06-07 14:40:45.875805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.330 [2024-06-07 14:40:45.875815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.330 qpair failed and we were unable to recover it. 
00:38:22.330 [2024-06-07 14:40:45.885673] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.330 [2024-06-07 14:40:45.885726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.330 [2024-06-07 14:40:45.885738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.330 [2024-06-07 14:40:45.885743] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.330 [2024-06-07 14:40:45.885748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.330 [2024-06-07 14:40:45.885759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.330 qpair failed and we were unable to recover it. 00:38:22.330 [2024-06-07 14:40:45.895791] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.330 [2024-06-07 14:40:45.895872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.330 [2024-06-07 14:40:45.895884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.330 [2024-06-07 14:40:45.895889] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.330 [2024-06-07 14:40:45.895895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.330 [2024-06-07 14:40:45.895908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.330 qpair failed and we were unable to recover it. 00:38:22.330 [2024-06-07 14:40:45.905819] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.330 [2024-06-07 14:40:45.905866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.330 [2024-06-07 14:40:45.905878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.330 [2024-06-07 14:40:45.905884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.330 [2024-06-07 14:40:45.905888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.330 [2024-06-07 14:40:45.905899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.330 qpair failed and we were unable to recover it. 
00:38:22.330 [2024-06-07 14:40:45.915785] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.330 [2024-06-07 14:40:45.915835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.330 [2024-06-07 14:40:45.915847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.330 [2024-06-07 14:40:45.915852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.330 [2024-06-07 14:40:45.915857] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.330 [2024-06-07 14:40:45.915868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.330 qpair failed and we were unable to recover it. 00:38:22.330 [2024-06-07 14:40:45.925893] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.330 [2024-06-07 14:40:45.925941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.330 [2024-06-07 14:40:45.925953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.330 [2024-06-07 14:40:45.925958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.330 [2024-06-07 14:40:45.925962] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.330 [2024-06-07 14:40:45.925973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.330 qpair failed and we were unable to recover it. 00:38:22.330 [2024-06-07 14:40:45.935983] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.330 [2024-06-07 14:40:45.936039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.330 [2024-06-07 14:40:45.936051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.330 [2024-06-07 14:40:45.936056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.330 [2024-06-07 14:40:45.936061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.330 [2024-06-07 14:40:45.936071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.330 qpair failed and we were unable to recover it. 
00:38:22.330 [2024-06-07 14:40:45.945995] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.330 [2024-06-07 14:40:45.946059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.330 [2024-06-07 14:40:45.946074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.330 [2024-06-07 14:40:45.946079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.330 [2024-06-07 14:40:45.946083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.330 [2024-06-07 14:40:45.946093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.330 qpair failed and we were unable to recover it. 00:38:22.330 [2024-06-07 14:40:45.955937] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.330 [2024-06-07 14:40:45.955991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.330 [2024-06-07 14:40:45.956003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.330 [2024-06-07 14:40:45.956008] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.330 [2024-06-07 14:40:45.956013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.330 [2024-06-07 14:40:45.956023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.330 qpair failed and we were unable to recover it. 00:38:22.330 [2024-06-07 14:40:45.965979] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.330 [2024-06-07 14:40:45.966024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.330 [2024-06-07 14:40:45.966037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.330 [2024-06-07 14:40:45.966042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.330 [2024-06-07 14:40:45.966046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.330 [2024-06-07 14:40:45.966056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.330 qpair failed and we were unable to recover it. 
00:38:22.591 [2024-06-07 14:40:45.976030] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.591 [2024-06-07 14:40:45.976077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.591 [2024-06-07 14:40:45.976089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.591 [2024-06-07 14:40:45.976094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.591 [2024-06-07 14:40:45.976098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.591 [2024-06-07 14:40:45.976109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.591 qpair failed and we were unable to recover it. 00:38:22.591 [2024-06-07 14:40:45.986050] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.591 [2024-06-07 14:40:45.986102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.591 [2024-06-07 14:40:45.986114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.591 [2024-06-07 14:40:45.986119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.591 [2024-06-07 14:40:45.986126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.591 [2024-06-07 14:40:45.986137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-06-07 14:40:45.996048] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.592 [2024-06-07 14:40:45.996102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.592 [2024-06-07 14:40:45.996115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.592 [2024-06-07 14:40:45.996120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.592 [2024-06-07 14:40:45.996124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.592 [2024-06-07 14:40:45.996135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.592 qpair failed and we were unable to recover it. 
00:38:22.592 [2024-06-07 14:40:46.006092] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.592 [2024-06-07 14:40:46.006144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.592 [2024-06-07 14:40:46.006156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.592 [2024-06-07 14:40:46.006161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.592 [2024-06-07 14:40:46.006166] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.592 [2024-06-07 14:40:46.006176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-06-07 14:40:46.016094] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.592 [2024-06-07 14:40:46.016145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.592 [2024-06-07 14:40:46.016157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.592 [2024-06-07 14:40:46.016162] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.592 [2024-06-07 14:40:46.016167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.592 [2024-06-07 14:40:46.016177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-06-07 14:40:46.026181] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.592 [2024-06-07 14:40:46.026281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.592 [2024-06-07 14:40:46.026293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.592 [2024-06-07 14:40:46.026299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.592 [2024-06-07 14:40:46.026303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.592 [2024-06-07 14:40:46.026314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.592 qpair failed and we were unable to recover it. 
00:38:22.592 [2024-06-07 14:40:46.036123] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.592 [2024-06-07 14:40:46.036171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.592 [2024-06-07 14:40:46.036183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.592 [2024-06-07 14:40:46.036188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.592 [2024-06-07 14:40:46.036192] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.592 [2024-06-07 14:40:46.036206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-06-07 14:40:46.046209] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.592 [2024-06-07 14:40:46.046255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.592 [2024-06-07 14:40:46.046266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.592 [2024-06-07 14:40:46.046271] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.592 [2024-06-07 14:40:46.046276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.592 [2024-06-07 14:40:46.046287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-06-07 14:40:46.056285] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.592 [2024-06-07 14:40:46.056342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.592 [2024-06-07 14:40:46.056354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.592 [2024-06-07 14:40:46.056359] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.592 [2024-06-07 14:40:46.056364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.592 [2024-06-07 14:40:46.056374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.592 qpair failed and we were unable to recover it. 
00:38:22.592 [2024-06-07 14:40:46.066262] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.592 [2024-06-07 14:40:46.066320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.592 [2024-06-07 14:40:46.066332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.592 [2024-06-07 14:40:46.066337] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.592 [2024-06-07 14:40:46.066342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.592 [2024-06-07 14:40:46.066352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-06-07 14:40:46.076253] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.592 [2024-06-07 14:40:46.076304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.592 [2024-06-07 14:40:46.076315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.592 [2024-06-07 14:40:46.076323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.592 [2024-06-07 14:40:46.076327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.592 [2024-06-07 14:40:46.076338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-06-07 14:40:46.086303] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.592 [2024-06-07 14:40:46.086354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.592 [2024-06-07 14:40:46.086366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.592 [2024-06-07 14:40:46.086371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.592 [2024-06-07 14:40:46.086376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.592 [2024-06-07 14:40:46.086386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.592 qpair failed and we were unable to recover it. 
00:38:22.592 [2024-06-07 14:40:46.096359] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.592 [2024-06-07 14:40:46.096406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.592 [2024-06-07 14:40:46.096418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.592 [2024-06-07 14:40:46.096423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.592 [2024-06-07 14:40:46.096427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.592 [2024-06-07 14:40:46.096437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-06-07 14:40:46.106327] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.592 [2024-06-07 14:40:46.106416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.592 [2024-06-07 14:40:46.106428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.592 [2024-06-07 14:40:46.106434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.592 [2024-06-07 14:40:46.106438] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.592 [2024-06-07 14:40:46.106448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.592 qpair failed and we were unable to recover it. 00:38:22.592 [2024-06-07 14:40:46.116362] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.592 [2024-06-07 14:40:46.116405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.592 [2024-06-07 14:40:46.116417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.592 [2024-06-07 14:40:46.116422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.592 [2024-06-07 14:40:46.116427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.593 [2024-06-07 14:40:46.116437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.593 qpair failed and we were unable to recover it. 
00:38:22.593 [2024-06-07 14:40:46.126432] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.593 [2024-06-07 14:40:46.126475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.593 [2024-06-07 14:40:46.126487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.593 [2024-06-07 14:40:46.126492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.593 [2024-06-07 14:40:46.126496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.593 [2024-06-07 14:40:46.126507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-06-07 14:40:46.136428] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.593 [2024-06-07 14:40:46.136513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.593 [2024-06-07 14:40:46.136525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.593 [2024-06-07 14:40:46.136530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.593 [2024-06-07 14:40:46.136535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.593 [2024-06-07 14:40:46.136545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-06-07 14:40:46.146443] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.593 [2024-06-07 14:40:46.146484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.593 [2024-06-07 14:40:46.146495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.593 [2024-06-07 14:40:46.146500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.593 [2024-06-07 14:40:46.146505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.593 [2024-06-07 14:40:46.146516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.593 qpair failed and we were unable to recover it. 
00:38:22.593 [2024-06-07 14:40:46.156465] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.593 [2024-06-07 14:40:46.156513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.593 [2024-06-07 14:40:46.156525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.593 [2024-06-07 14:40:46.156530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.593 [2024-06-07 14:40:46.156534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.593 [2024-06-07 14:40:46.156545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-06-07 14:40:46.166517] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.593 [2024-06-07 14:40:46.166563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.593 [2024-06-07 14:40:46.166575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.593 [2024-06-07 14:40:46.166586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.593 [2024-06-07 14:40:46.166590] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.593 [2024-06-07 14:40:46.166600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-06-07 14:40:46.176538] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.593 [2024-06-07 14:40:46.176580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.593 [2024-06-07 14:40:46.176593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.593 [2024-06-07 14:40:46.176598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.593 [2024-06-07 14:40:46.176602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.593 [2024-06-07 14:40:46.176612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.593 qpair failed and we were unable to recover it. 
00:38:22.593 [2024-06-07 14:40:46.186547] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.593 [2024-06-07 14:40:46.186588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.593 [2024-06-07 14:40:46.186600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.593 [2024-06-07 14:40:46.186605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.593 [2024-06-07 14:40:46.186610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.593 [2024-06-07 14:40:46.186620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-06-07 14:40:46.196566] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.593 [2024-06-07 14:40:46.196620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.593 [2024-06-07 14:40:46.196632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.593 [2024-06-07 14:40:46.196638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.593 [2024-06-07 14:40:46.196642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.593 [2024-06-07 14:40:46.196653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-06-07 14:40:46.206529] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.593 [2024-06-07 14:40:46.206573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.593 [2024-06-07 14:40:46.206588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.593 [2024-06-07 14:40:46.206593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.593 [2024-06-07 14:40:46.206597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.593 [2024-06-07 14:40:46.206608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.593 qpair failed and we were unable to recover it. 
00:38:22.593 [2024-06-07 14:40:46.216646] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.593 [2024-06-07 14:40:46.216691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.593 [2024-06-07 14:40:46.216703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.593 [2024-06-07 14:40:46.216708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.593 [2024-06-07 14:40:46.216712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.593 [2024-06-07 14:40:46.216723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-06-07 14:40:46.226613] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.593 [2024-06-07 14:40:46.226655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.593 [2024-06-07 14:40:46.226668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.593 [2024-06-07 14:40:46.226672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.593 [2024-06-07 14:40:46.226677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.593 [2024-06-07 14:40:46.226687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.593 qpair failed and we were unable to recover it. 00:38:22.593 [2024-06-07 14:40:46.236724] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.593 [2024-06-07 14:40:46.236790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.593 [2024-06-07 14:40:46.236802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.593 [2024-06-07 14:40:46.236807] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.593 [2024-06-07 14:40:46.236811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.593 [2024-06-07 14:40:46.236822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.593 qpair failed and we were unable to recover it. 
00:38:22.855 [2024-06-07 14:40:46.246746] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.855 [2024-06-07 14:40:46.246837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.855 [2024-06-07 14:40:46.246849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.855 [2024-06-07 14:40:46.246854] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.855 [2024-06-07 14:40:46.246858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.855 [2024-06-07 14:40:46.246869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.855 qpair failed and we were unable to recover it. 00:38:22.855 [2024-06-07 14:40:46.256767] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.855 [2024-06-07 14:40:46.256855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.855 [2024-06-07 14:40:46.256869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.855 [2024-06-07 14:40:46.256874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.855 [2024-06-07 14:40:46.256879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.855 [2024-06-07 14:40:46.256889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.856 qpair failed and we were unable to recover it. 00:38:22.856 [2024-06-07 14:40:46.266758] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.856 [2024-06-07 14:40:46.266798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.856 [2024-06-07 14:40:46.266810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.856 [2024-06-07 14:40:46.266815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.856 [2024-06-07 14:40:46.266820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.856 [2024-06-07 14:40:46.266830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.856 qpair failed and we were unable to recover it. 
00:38:22.856 [2024-06-07 14:40:46.276789] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.856 [2024-06-07 14:40:46.276836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.856 [2024-06-07 14:40:46.276847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.856 [2024-06-07 14:40:46.276852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.856 [2024-06-07 14:40:46.276856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.856 [2024-06-07 14:40:46.276867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.856 qpair failed and we were unable to recover it. 00:38:22.856 [2024-06-07 14:40:46.286813] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.856 [2024-06-07 14:40:46.286862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.856 [2024-06-07 14:40:46.286873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.856 [2024-06-07 14:40:46.286878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.856 [2024-06-07 14:40:46.286883] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.856 [2024-06-07 14:40:46.286893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.856 qpair failed and we were unable to recover it. 00:38:22.856 [2024-06-07 14:40:46.296877] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.856 [2024-06-07 14:40:46.296921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.856 [2024-06-07 14:40:46.296934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.856 [2024-06-07 14:40:46.296939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.856 [2024-06-07 14:40:46.296943] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.856 [2024-06-07 14:40:46.296957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.856 qpair failed and we were unable to recover it. 
00:38:22.856 [2024-06-07 14:40:46.306858] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.856 [2024-06-07 14:40:46.306901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.856 [2024-06-07 14:40:46.306920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.856 [2024-06-07 14:40:46.306926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.856 [2024-06-07 14:40:46.306931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.856 [2024-06-07 14:40:46.306944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.856 qpair failed and we were unable to recover it. 00:38:22.856 [2024-06-07 14:40:46.316899] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.856 [2024-06-07 14:40:46.316945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.856 [2024-06-07 14:40:46.316964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.856 [2024-06-07 14:40:46.316969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.856 [2024-06-07 14:40:46.316974] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.856 [2024-06-07 14:40:46.316988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.856 qpair failed and we were unable to recover it. 00:38:22.856 [2024-06-07 14:40:46.326951] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.856 [2024-06-07 14:40:46.327076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.856 [2024-06-07 14:40:46.327089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.856 [2024-06-07 14:40:46.327095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.856 [2024-06-07 14:40:46.327099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.856 [2024-06-07 14:40:46.327110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.856 qpair failed and we were unable to recover it. 
00:38:22.856 [2024-06-07 14:40:46.336848] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.856 [2024-06-07 14:40:46.336897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.856 [2024-06-07 14:40:46.336910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.856 [2024-06-07 14:40:46.336915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.856 [2024-06-07 14:40:46.336919] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.856 [2024-06-07 14:40:46.336930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.856 qpair failed and we were unable to recover it. 00:38:22.856 [2024-06-07 14:40:46.346975] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.856 [2024-06-07 14:40:46.347015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.856 [2024-06-07 14:40:46.347030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.856 [2024-06-07 14:40:46.347035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.856 [2024-06-07 14:40:46.347040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.856 [2024-06-07 14:40:46.347050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.856 qpair failed and we were unable to recover it. 00:38:22.856 [2024-06-07 14:40:46.356892] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.856 [2024-06-07 14:40:46.356942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.856 [2024-06-07 14:40:46.356954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.856 [2024-06-07 14:40:46.356959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.856 [2024-06-07 14:40:46.356963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.856 [2024-06-07 14:40:46.356974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.856 qpair failed and we were unable to recover it. 
00:38:22.856 [2024-06-07 14:40:46.367089] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.856 [2024-06-07 14:40:46.367169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.856 [2024-06-07 14:40:46.367181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.856 [2024-06-07 14:40:46.367187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.856 [2024-06-07 14:40:46.367191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.856 [2024-06-07 14:40:46.367204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.856 qpair failed and we were unable to recover it. 00:38:22.856 [2024-06-07 14:40:46.377141] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.856 [2024-06-07 14:40:46.377189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.856 [2024-06-07 14:40:46.377203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.856 [2024-06-07 14:40:46.377208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.856 [2024-06-07 14:40:46.377212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.856 [2024-06-07 14:40:46.377223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.856 qpair failed and we were unable to recover it. 00:38:22.856 [2024-06-07 14:40:46.387090] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.856 [2024-06-07 14:40:46.387137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.856 [2024-06-07 14:40:46.387149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.856 [2024-06-07 14:40:46.387154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.856 [2024-06-07 14:40:46.387161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.857 [2024-06-07 14:40:46.387171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.857 qpair failed and we were unable to recover it. 
00:38:22.857 [2024-06-07 14:40:46.397095] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.857 [2024-06-07 14:40:46.397147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.857 [2024-06-07 14:40:46.397159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.857 [2024-06-07 14:40:46.397164] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.857 [2024-06-07 14:40:46.397169] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.857 [2024-06-07 14:40:46.397179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.857 qpair failed and we were unable to recover it. 00:38:22.857 [2024-06-07 14:40:46.407045] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.857 [2024-06-07 14:40:46.407093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.857 [2024-06-07 14:40:46.407105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.857 [2024-06-07 14:40:46.407110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.857 [2024-06-07 14:40:46.407115] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.857 [2024-06-07 14:40:46.407125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.857 qpair failed and we were unable to recover it. 00:38:22.857 [2024-06-07 14:40:46.417203] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.857 [2024-06-07 14:40:46.417247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.857 [2024-06-07 14:40:46.417259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.857 [2024-06-07 14:40:46.417265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.857 [2024-06-07 14:40:46.417269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.857 [2024-06-07 14:40:46.417279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.857 qpair failed and we were unable to recover it. 
00:38:22.857 [2024-06-07 14:40:46.427067] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.857 [2024-06-07 14:40:46.427108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.857 [2024-06-07 14:40:46.427120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.857 [2024-06-07 14:40:46.427126] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.857 [2024-06-07 14:40:46.427130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.857 [2024-06-07 14:40:46.427140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.857 qpair failed and we were unable to recover it. 00:38:22.857 [2024-06-07 14:40:46.437233] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.857 [2024-06-07 14:40:46.437316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.857 [2024-06-07 14:40:46.437329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.857 [2024-06-07 14:40:46.437334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.857 [2024-06-07 14:40:46.437339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.857 [2024-06-07 14:40:46.437349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.857 qpair failed and we were unable to recover it. 00:38:22.857 [2024-06-07 14:40:46.447145] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.857 [2024-06-07 14:40:46.447193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.857 [2024-06-07 14:40:46.447207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.857 [2024-06-07 14:40:46.447212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.857 [2024-06-07 14:40:46.447217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.857 [2024-06-07 14:40:46.447228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.857 qpair failed and we were unable to recover it. 
00:38:22.857 [2024-06-07 14:40:46.457290] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.857 [2024-06-07 14:40:46.457341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.857 [2024-06-07 14:40:46.457353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.857 [2024-06-07 14:40:46.457358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.857 [2024-06-07 14:40:46.457363] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.857 [2024-06-07 14:40:46.457373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.857 qpair failed and we were unable to recover it. 00:38:22.857 [2024-06-07 14:40:46.467308] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.857 [2024-06-07 14:40:46.467349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.857 [2024-06-07 14:40:46.467360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.857 [2024-06-07 14:40:46.467365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.857 [2024-06-07 14:40:46.467370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.857 [2024-06-07 14:40:46.467380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.857 qpair failed and we were unable to recover it. 00:38:22.857 [2024-06-07 14:40:46.477334] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.857 [2024-06-07 14:40:46.477417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.857 [2024-06-07 14:40:46.477429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.857 [2024-06-07 14:40:46.477434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.857 [2024-06-07 14:40:46.477442] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.857 [2024-06-07 14:40:46.477452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.857 qpair failed and we were unable to recover it. 
00:38:22.857 [2024-06-07 14:40:46.487390] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.857 [2024-06-07 14:40:46.487433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.857 [2024-06-07 14:40:46.487445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.857 [2024-06-07 14:40:46.487450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.857 [2024-06-07 14:40:46.487455] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.857 [2024-06-07 14:40:46.487465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.857 qpair failed and we were unable to recover it. 00:38:22.857 [2024-06-07 14:40:46.497423] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:22.857 [2024-06-07 14:40:46.497464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:22.857 [2024-06-07 14:40:46.497476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:22.857 [2024-06-07 14:40:46.497481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:22.857 [2024-06-07 14:40:46.497486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:22.857 [2024-06-07 14:40:46.497497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:22.857 qpair failed and we were unable to recover it. 00:38:23.118 [2024-06-07 14:40:46.507415] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.118 [2024-06-07 14:40:46.507458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.118 [2024-06-07 14:40:46.507470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.118 [2024-06-07 14:40:46.507475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.118 [2024-06-07 14:40:46.507480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.118 [2024-06-07 14:40:46.507490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.118 qpair failed and we were unable to recover it. 
00:38:23.118 [2024-06-07 14:40:46.517435] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.118 [2024-06-07 14:40:46.517480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.118 [2024-06-07 14:40:46.517492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.118 [2024-06-07 14:40:46.517498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.118 [2024-06-07 14:40:46.517502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.118 [2024-06-07 14:40:46.517513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.118 qpair failed and we were unable to recover it. 00:38:23.118 [2024-06-07 14:40:46.527501] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.118 [2024-06-07 14:40:46.527547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.118 [2024-06-07 14:40:46.527559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.118 [2024-06-07 14:40:46.527565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.118 [2024-06-07 14:40:46.527569] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.118 [2024-06-07 14:40:46.527579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.118 qpair failed and we were unable to recover it. 00:38:23.118 [2024-06-07 14:40:46.537545] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.118 [2024-06-07 14:40:46.537592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.118 [2024-06-07 14:40:46.537603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.118 [2024-06-07 14:40:46.537609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.118 [2024-06-07 14:40:46.537613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.119 [2024-06-07 14:40:46.537623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.119 qpair failed and we were unable to recover it. 
00:38:23.119 [2024-06-07 14:40:46.547523] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.119 [2024-06-07 14:40:46.547572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.119 [2024-06-07 14:40:46.547584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.119 [2024-06-07 14:40:46.547589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.119 [2024-06-07 14:40:46.547593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.119 [2024-06-07 14:40:46.547604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.119 qpair failed and we were unable to recover it. 00:38:23.119 [2024-06-07 14:40:46.557472] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.119 [2024-06-07 14:40:46.557521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.119 [2024-06-07 14:40:46.557533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.119 [2024-06-07 14:40:46.557539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.119 [2024-06-07 14:40:46.557543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.119 [2024-06-07 14:40:46.557554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.119 qpair failed and we were unable to recover it. 00:38:23.119 [2024-06-07 14:40:46.567620] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.119 [2024-06-07 14:40:46.567665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.119 [2024-06-07 14:40:46.567677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.119 [2024-06-07 14:40:46.567685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.119 [2024-06-07 14:40:46.567689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.119 [2024-06-07 14:40:46.567700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.119 qpair failed and we were unable to recover it. 
00:38:23.119 [2024-06-07 14:40:46.577607] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.119 [2024-06-07 14:40:46.577656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.119 [2024-06-07 14:40:46.577668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.119 [2024-06-07 14:40:46.577673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.119 [2024-06-07 14:40:46.577677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.119 [2024-06-07 14:40:46.577687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.119 qpair failed and we were unable to recover it. 00:38:23.119 [2024-06-07 14:40:46.587539] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.119 [2024-06-07 14:40:46.587579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.119 [2024-06-07 14:40:46.587590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.119 [2024-06-07 14:40:46.587595] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.119 [2024-06-07 14:40:46.587599] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.119 [2024-06-07 14:40:46.587610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.119 qpair failed and we were unable to recover it. 00:38:23.119 [2024-06-07 14:40:46.597657] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.119 [2024-06-07 14:40:46.597716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.119 [2024-06-07 14:40:46.597728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.119 [2024-06-07 14:40:46.597733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.119 [2024-06-07 14:40:46.597737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.119 [2024-06-07 14:40:46.597747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.119 qpair failed and we were unable to recover it. 
00:38:23.119 [2024-06-07 14:40:46.607715] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.119 [2024-06-07 14:40:46.607766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.119 [2024-06-07 14:40:46.607777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.119 [2024-06-07 14:40:46.607782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.119 [2024-06-07 14:40:46.607787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.119 [2024-06-07 14:40:46.607797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.119 qpair failed and we were unable to recover it. 00:38:23.119 [2024-06-07 14:40:46.617748] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.119 [2024-06-07 14:40:46.617791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.119 [2024-06-07 14:40:46.617804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.119 [2024-06-07 14:40:46.617809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.119 [2024-06-07 14:40:46.617813] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.119 [2024-06-07 14:40:46.617823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.119 qpair failed and we were unable to recover it. 00:38:23.119 [2024-06-07 14:40:46.627745] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.119 [2024-06-07 14:40:46.627787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.119 [2024-06-07 14:40:46.627800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.119 [2024-06-07 14:40:46.627805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.119 [2024-06-07 14:40:46.627809] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.119 [2024-06-07 14:40:46.627819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.119 qpair failed and we were unable to recover it. 
00:38:23.119 [2024-06-07 14:40:46.637777] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.119 [2024-06-07 14:40:46.637822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.119 [2024-06-07 14:40:46.637834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.119 [2024-06-07 14:40:46.637839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.119 [2024-06-07 14:40:46.637843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.119 [2024-06-07 14:40:46.637853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.119 qpair failed and we were unable to recover it. 00:38:23.119 [2024-06-07 14:40:46.647870] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.119 [2024-06-07 14:40:46.647946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.119 [2024-06-07 14:40:46.647958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.119 [2024-06-07 14:40:46.647963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.119 [2024-06-07 14:40:46.647967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.119 [2024-06-07 14:40:46.647977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.119 qpair failed and we were unable to recover it. 00:38:23.119 [2024-06-07 14:40:46.657839] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.119 [2024-06-07 14:40:46.657886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.119 [2024-06-07 14:40:46.657900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.119 [2024-06-07 14:40:46.657905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.119 [2024-06-07 14:40:46.657910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.119 [2024-06-07 14:40:46.657920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.119 qpair failed and we were unable to recover it. 
00:38:23.119 [2024-06-07 14:40:46.667850] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.119 [2024-06-07 14:40:46.667891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.119 [2024-06-07 14:40:46.667903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.120 [2024-06-07 14:40:46.667908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.120 [2024-06-07 14:40:46.667913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.120 [2024-06-07 14:40:46.667923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.120 qpair failed and we were unable to recover it. 00:38:23.120 [2024-06-07 14:40:46.677882] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.120 [2024-06-07 14:40:46.677931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.120 [2024-06-07 14:40:46.677943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.120 [2024-06-07 14:40:46.677948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.120 [2024-06-07 14:40:46.677952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.120 [2024-06-07 14:40:46.677962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.120 qpair failed and we were unable to recover it. 00:38:23.120 [2024-06-07 14:40:46.687942] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.120 [2024-06-07 14:40:46.687986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.120 [2024-06-07 14:40:46.687998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.120 [2024-06-07 14:40:46.688003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.120 [2024-06-07 14:40:46.688007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.120 [2024-06-07 14:40:46.688017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.120 qpair failed and we were unable to recover it. 
00:38:23.120 [2024-06-07 14:40:46.697978] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.120 [2024-06-07 14:40:46.698025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.120 [2024-06-07 14:40:46.698040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.120 [2024-06-07 14:40:46.698046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.120 [2024-06-07 14:40:46.698050] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.120 [2024-06-07 14:40:46.698065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.120 qpair failed and we were unable to recover it. 00:38:23.120 [2024-06-07 14:40:46.707960] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.120 [2024-06-07 14:40:46.708002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.120 [2024-06-07 14:40:46.708014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.120 [2024-06-07 14:40:46.708019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.120 [2024-06-07 14:40:46.708024] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.120 [2024-06-07 14:40:46.708034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.120 qpair failed and we were unable to recover it. 00:38:23.120 [2024-06-07 14:40:46.718002] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.120 [2024-06-07 14:40:46.718044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.120 [2024-06-07 14:40:46.718056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.120 [2024-06-07 14:40:46.718061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.120 [2024-06-07 14:40:46.718066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.120 [2024-06-07 14:40:46.718076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.120 qpair failed and we were unable to recover it. 
00:38:23.120 [2024-06-07 14:40:46.728057] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.120 [2024-06-07 14:40:46.728107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.120 [2024-06-07 14:40:46.728119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.120 [2024-06-07 14:40:46.728124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.120 [2024-06-07 14:40:46.728129] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.120 [2024-06-07 14:40:46.728139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.120 qpair failed and we were unable to recover it. 00:38:23.120 [2024-06-07 14:40:46.738044] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.120 [2024-06-07 14:40:46.738089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.120 [2024-06-07 14:40:46.738101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.120 [2024-06-07 14:40:46.738106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.120 [2024-06-07 14:40:46.738111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.120 [2024-06-07 14:40:46.738122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.120 qpair failed and we were unable to recover it. 00:38:23.120 [2024-06-07 14:40:46.748088] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.120 [2024-06-07 14:40:46.748130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.120 [2024-06-07 14:40:46.748145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.120 [2024-06-07 14:40:46.748150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.120 [2024-06-07 14:40:46.748154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.120 [2024-06-07 14:40:46.748165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.120 qpair failed and we were unable to recover it. 
00:38:23.120 [2024-06-07 14:40:46.758104] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.120 [2024-06-07 14:40:46.758189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.120 [2024-06-07 14:40:46.758205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.120 [2024-06-07 14:40:46.758210] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.120 [2024-06-07 14:40:46.758214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.120 [2024-06-07 14:40:46.758225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.120 qpair failed and we were unable to recover it. 00:38:23.381 [2024-06-07 14:40:46.768032] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.381 [2024-06-07 14:40:46.768077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.381 [2024-06-07 14:40:46.768089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.381 [2024-06-07 14:40:46.768094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.381 [2024-06-07 14:40:46.768099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.381 [2024-06-07 14:40:46.768109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.381 qpair failed and we were unable to recover it. 00:38:23.381 [2024-06-07 14:40:46.778197] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.381 [2024-06-07 14:40:46.778247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.381 [2024-06-07 14:40:46.778259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.381 [2024-06-07 14:40:46.778264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.381 [2024-06-07 14:40:46.778269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.381 [2024-06-07 14:40:46.778279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.381 qpair failed and we were unable to recover it. 
00:38:23.381 [2024-06-07 14:40:46.788186] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.381 [2024-06-07 14:40:46.788230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.381 [2024-06-07 14:40:46.788242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.381 [2024-06-07 14:40:46.788247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.381 [2024-06-07 14:40:46.788254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.381 [2024-06-07 14:40:46.788265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.381 qpair failed and we were unable to recover it. 00:38:23.381 [2024-06-07 14:40:46.798177] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.381 [2024-06-07 14:40:46.798226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.381 [2024-06-07 14:40:46.798239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.381 [2024-06-07 14:40:46.798244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.381 [2024-06-07 14:40:46.798248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.381 [2024-06-07 14:40:46.798259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.381 qpair failed and we were unable to recover it. 00:38:23.381 [2024-06-07 14:40:46.808274] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.381 [2024-06-07 14:40:46.808317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.381 [2024-06-07 14:40:46.808329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.381 [2024-06-07 14:40:46.808334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.381 [2024-06-07 14:40:46.808338] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.381 [2024-06-07 14:40:46.808348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.381 qpair failed and we were unable to recover it. 
00:38:23.381 [2024-06-07 14:40:46.818298] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.381 [2024-06-07 14:40:46.818343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.381 [2024-06-07 14:40:46.818355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.381 [2024-06-07 14:40:46.818360] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.381 [2024-06-07 14:40:46.818364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.382 [2024-06-07 14:40:46.818375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.382 qpair failed and we were unable to recover it. 00:38:23.382 [2024-06-07 14:40:46.828279] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.382 [2024-06-07 14:40:46.828319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.382 [2024-06-07 14:40:46.828331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.382 [2024-06-07 14:40:46.828335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.382 [2024-06-07 14:40:46.828340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.382 [2024-06-07 14:40:46.828350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.382 qpair failed and we were unable to recover it. 00:38:23.382 [2024-06-07 14:40:46.838326] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.382 [2024-06-07 14:40:46.838372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.382 [2024-06-07 14:40:46.838384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.382 [2024-06-07 14:40:46.838389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.382 [2024-06-07 14:40:46.838393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.382 [2024-06-07 14:40:46.838403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.382 qpair failed and we were unable to recover it. 
00:38:23.382 [2024-06-07 14:40:46.848320] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.382 [2024-06-07 14:40:46.848356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.382 [2024-06-07 14:40:46.848367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.382 [2024-06-07 14:40:46.848372] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.382 [2024-06-07 14:40:46.848377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.382 [2024-06-07 14:40:46.848387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.382 qpair failed and we were unable to recover it. 00:38:23.382 [2024-06-07 14:40:46.858407] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.382 [2024-06-07 14:40:46.858448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.382 [2024-06-07 14:40:46.858460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.382 [2024-06-07 14:40:46.858465] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.382 [2024-06-07 14:40:46.858470] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.382 [2024-06-07 14:40:46.858480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.382 qpair failed and we were unable to recover it. 00:38:23.382 [2024-06-07 14:40:46.868408] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.382 [2024-06-07 14:40:46.868473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.382 [2024-06-07 14:40:46.868485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.382 [2024-06-07 14:40:46.868490] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.382 [2024-06-07 14:40:46.868494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.382 [2024-06-07 14:40:46.868504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.382 qpair failed and we were unable to recover it. 
00:38:23.382 [2024-06-07 14:40:46.878522] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.382 [2024-06-07 14:40:46.878569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.382 [2024-06-07 14:40:46.878581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.382 [2024-06-07 14:40:46.878586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.382 [2024-06-07 14:40:46.878597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.382 [2024-06-07 14:40:46.878608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.382 qpair failed and we were unable to recover it. 00:38:23.382 [2024-06-07 14:40:46.888443] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.382 [2024-06-07 14:40:46.888482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.382 [2024-06-07 14:40:46.888494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.382 [2024-06-07 14:40:46.888499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.382 [2024-06-07 14:40:46.888503] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.382 [2024-06-07 14:40:46.888514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.382 qpair failed and we were unable to recover it. 00:38:23.382 [2024-06-07 14:40:46.898513] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.382 [2024-06-07 14:40:46.898554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.382 [2024-06-07 14:40:46.898566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.382 [2024-06-07 14:40:46.898571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.382 [2024-06-07 14:40:46.898576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.382 [2024-06-07 14:40:46.898586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.382 qpair failed and we were unable to recover it. 
00:38:23.382 [2024-06-07 14:40:46.908501] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.382 [2024-06-07 14:40:46.908541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.382 [2024-06-07 14:40:46.908553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.382 [2024-06-07 14:40:46.908558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.382 [2024-06-07 14:40:46.908563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.382 [2024-06-07 14:40:46.908573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.382 qpair failed and we were unable to recover it. 00:38:23.382 [2024-06-07 14:40:46.918514] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.382 [2024-06-07 14:40:46.918570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.382 [2024-06-07 14:40:46.918582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.382 [2024-06-07 14:40:46.918587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.382 [2024-06-07 14:40:46.918592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.382 [2024-06-07 14:40:46.918602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.382 qpair failed and we were unable to recover it. 00:38:23.382 [2024-06-07 14:40:46.928601] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.382 [2024-06-07 14:40:46.928677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.382 [2024-06-07 14:40:46.928689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.382 [2024-06-07 14:40:46.928694] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.382 [2024-06-07 14:40:46.928698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.382 [2024-06-07 14:40:46.928709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.382 qpair failed and we were unable to recover it. 
00:38:23.382 [2024-06-07 14:40:46.938619] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.382 [2024-06-07 14:40:46.938662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.382 [2024-06-07 14:40:46.938674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.382 [2024-06-07 14:40:46.938679] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.382 [2024-06-07 14:40:46.938684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.382 [2024-06-07 14:40:46.938694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.382 qpair failed and we were unable to recover it. 00:38:23.382 [2024-06-07 14:40:46.948634] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.382 [2024-06-07 14:40:46.948674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.382 [2024-06-07 14:40:46.948686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.382 [2024-06-07 14:40:46.948691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.382 [2024-06-07 14:40:46.948696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.382 [2024-06-07 14:40:46.948706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.382 qpair failed and we were unable to recover it. 00:38:23.382 [2024-06-07 14:40:46.958653] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.383 [2024-06-07 14:40:46.958718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.383 [2024-06-07 14:40:46.958730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.383 [2024-06-07 14:40:46.958735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.383 [2024-06-07 14:40:46.958740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.383 [2024-06-07 14:40:46.958750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.383 qpair failed and we were unable to recover it. 
00:38:23.383 [2024-06-07 14:40:46.968663] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.383 [2024-06-07 14:40:46.968705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.383 [2024-06-07 14:40:46.968717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.383 [2024-06-07 14:40:46.968725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.383 [2024-06-07 14:40:46.968729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.383 [2024-06-07 14:40:46.968739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.383 qpair failed and we were unable to recover it. 00:38:23.383 [2024-06-07 14:40:46.978737] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.383 [2024-06-07 14:40:46.978829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.383 [2024-06-07 14:40:46.978841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.383 [2024-06-07 14:40:46.978846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.383 [2024-06-07 14:40:46.978851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.383 [2024-06-07 14:40:46.978861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.383 qpair failed and we were unable to recover it. 00:38:23.383 [2024-06-07 14:40:46.988726] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.383 [2024-06-07 14:40:46.988767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.383 [2024-06-07 14:40:46.988779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.383 [2024-06-07 14:40:46.988784] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.383 [2024-06-07 14:40:46.988789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.383 [2024-06-07 14:40:46.988799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.383 qpair failed and we were unable to recover it. 
00:38:23.383 [2024-06-07 14:40:46.998701] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.383 [2024-06-07 14:40:46.998764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.383 [2024-06-07 14:40:46.998776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.383 [2024-06-07 14:40:46.998781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.383 [2024-06-07 14:40:46.998785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.383 [2024-06-07 14:40:46.998795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.383 qpair failed and we were unable to recover it. 00:38:23.383 [2024-06-07 14:40:47.008648] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.383 [2024-06-07 14:40:47.008686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.383 [2024-06-07 14:40:47.008698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.383 [2024-06-07 14:40:47.008703] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.383 [2024-06-07 14:40:47.008707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.383 [2024-06-07 14:40:47.008718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.383 qpair failed and we were unable to recover it. 00:38:23.383 [2024-06-07 14:40:47.018811] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.383 [2024-06-07 14:40:47.018857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.383 [2024-06-07 14:40:47.018870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.383 [2024-06-07 14:40:47.018874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.383 [2024-06-07 14:40:47.018879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.383 [2024-06-07 14:40:47.018889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.383 qpair failed and we were unable to recover it. 
00:38:23.645 [2024-06-07 14:40:47.028830] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.645 [2024-06-07 14:40:47.028913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.645 [2024-06-07 14:40:47.028925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.645 [2024-06-07 14:40:47.028930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.645 [2024-06-07 14:40:47.028934] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.645 [2024-06-07 14:40:47.028944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.645 qpair failed and we were unable to recover it. 00:38:23.645 [2024-06-07 14:40:47.038851] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.645 [2024-06-07 14:40:47.038903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.645 [2024-06-07 14:40:47.038915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.645 [2024-06-07 14:40:47.038920] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.645 [2024-06-07 14:40:47.038924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.645 [2024-06-07 14:40:47.038935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.645 qpair failed and we were unable to recover it. 00:38:23.645 [2024-06-07 14:40:47.048886] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.645 [2024-06-07 14:40:47.048931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.645 [2024-06-07 14:40:47.048950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.645 [2024-06-07 14:40:47.048956] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.645 [2024-06-07 14:40:47.048960] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.645 [2024-06-07 14:40:47.048974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.645 qpair failed and we were unable to recover it. 
00:38:23.645 [2024-06-07 14:40:47.058957] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.645 [2024-06-07 14:40:47.059004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.645 [2024-06-07 14:40:47.059025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.645 [2024-06-07 14:40:47.059032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.645 [2024-06-07 14:40:47.059036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.645 [2024-06-07 14:40:47.059050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.645 qpair failed and we were unable to recover it. 00:38:23.645 [2024-06-07 14:40:47.068951] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.645 [2024-06-07 14:40:47.069033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.645 [2024-06-07 14:40:47.069046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.645 [2024-06-07 14:40:47.069051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.645 [2024-06-07 14:40:47.069056] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.645 [2024-06-07 14:40:47.069067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.645 qpair failed and we were unable to recover it. 00:38:23.645 [2024-06-07 14:40:47.078967] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.645 [2024-06-07 14:40:47.079011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.645 [2024-06-07 14:40:47.079023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.645 [2024-06-07 14:40:47.079028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.645 [2024-06-07 14:40:47.079033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.645 [2024-06-07 14:40:47.079044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.645 qpair failed and we were unable to recover it. 
00:38:23.645 [2024-06-07 14:40:47.088891] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.645 [2024-06-07 14:40:47.088932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.645 [2024-06-07 14:40:47.088944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.645 [2024-06-07 14:40:47.088949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.645 [2024-06-07 14:40:47.088954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.645 [2024-06-07 14:40:47.088964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.645 qpair failed and we were unable to recover it. 00:38:23.645 [2024-06-07 14:40:47.099074] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.645 [2024-06-07 14:40:47.099123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.645 [2024-06-07 14:40:47.099135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.645 [2024-06-07 14:40:47.099140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.645 [2024-06-07 14:40:47.099145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.645 [2024-06-07 14:40:47.099158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.645 qpair failed and we were unable to recover it. 00:38:23.646 [2024-06-07 14:40:47.109062] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.646 [2024-06-07 14:40:47.109102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.646 [2024-06-07 14:40:47.109114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.646 [2024-06-07 14:40:47.109119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.646 [2024-06-07 14:40:47.109123] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.646 [2024-06-07 14:40:47.109134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.646 qpair failed and we were unable to recover it. 
00:38:23.646 [2024-06-07 14:40:47.119074] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.646 [2024-06-07 14:40:47.119117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.646 [2024-06-07 14:40:47.119129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.646 [2024-06-07 14:40:47.119135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.646 [2024-06-07 14:40:47.119139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.646 [2024-06-07 14:40:47.119150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.646 qpair failed and we were unable to recover it. 00:38:23.646 [2024-06-07 14:40:47.129105] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.646 [2024-06-07 14:40:47.129152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.646 [2024-06-07 14:40:47.129164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.646 [2024-06-07 14:40:47.129169] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.646 [2024-06-07 14:40:47.129174] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.646 [2024-06-07 14:40:47.129184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.646 qpair failed and we were unable to recover it. 00:38:23.646 [2024-06-07 14:40:47.139036] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.646 [2024-06-07 14:40:47.139080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.646 [2024-06-07 14:40:47.139092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.646 [2024-06-07 14:40:47.139097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.646 [2024-06-07 14:40:47.139102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.646 [2024-06-07 14:40:47.139112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.646 qpair failed and we were unable to recover it. 
00:38:23.646 [2024-06-07 14:40:47.149161] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.646 [2024-06-07 14:40:47.149208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.646 [2024-06-07 14:40:47.149223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.646 [2024-06-07 14:40:47.149228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.646 [2024-06-07 14:40:47.149233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.646 [2024-06-07 14:40:47.149243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.646 qpair failed and we were unable to recover it. 00:38:23.646 [2024-06-07 14:40:47.159208] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.646 [2024-06-07 14:40:47.159264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.646 [2024-06-07 14:40:47.159277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.646 [2024-06-07 14:40:47.159282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.646 [2024-06-07 14:40:47.159286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.646 [2024-06-07 14:40:47.159296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.646 qpair failed and we were unable to recover it. 00:38:23.646 [2024-06-07 14:40:47.169193] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.646 [2024-06-07 14:40:47.169234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.646 [2024-06-07 14:40:47.169246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.646 [2024-06-07 14:40:47.169251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.646 [2024-06-07 14:40:47.169256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.646 [2024-06-07 14:40:47.169266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.646 qpair failed and we were unable to recover it. 
00:38:23.646 [2024-06-07 14:40:47.179253] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.646 [2024-06-07 14:40:47.179297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.646 [2024-06-07 14:40:47.179309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.646 [2024-06-07 14:40:47.179314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.646 [2024-06-07 14:40:47.179318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.646 [2024-06-07 14:40:47.179329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.646 qpair failed and we were unable to recover it. 00:38:23.646 [2024-06-07 14:40:47.189293] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.646 [2024-06-07 14:40:47.189334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.646 [2024-06-07 14:40:47.189347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.646 [2024-06-07 14:40:47.189352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.646 [2024-06-07 14:40:47.189356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.646 [2024-06-07 14:40:47.189369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.646 qpair failed and we were unable to recover it. 00:38:23.646 [2024-06-07 14:40:47.199170] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.646 [2024-06-07 14:40:47.199225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.646 [2024-06-07 14:40:47.199237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.646 [2024-06-07 14:40:47.199243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.646 [2024-06-07 14:40:47.199247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.646 [2024-06-07 14:40:47.199258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.646 qpair failed and we were unable to recover it. 
00:38:23.646 [2024-06-07 14:40:47.209331] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.646 [2024-06-07 14:40:47.209407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.646 [2024-06-07 14:40:47.209420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.646 [2024-06-07 14:40:47.209425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.646 [2024-06-07 14:40:47.209433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.646 [2024-06-07 14:40:47.209445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.646 qpair failed and we were unable to recover it. 00:38:23.646 [2024-06-07 14:40:47.219393] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.646 [2024-06-07 14:40:47.219492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.646 [2024-06-07 14:40:47.219506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.646 [2024-06-07 14:40:47.219512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.646 [2024-06-07 14:40:47.219517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.646 [2024-06-07 14:40:47.219528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.646 qpair failed and we were unable to recover it. 00:38:23.646 [2024-06-07 14:40:47.229382] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.646 [2024-06-07 14:40:47.229461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.646 [2024-06-07 14:40:47.229473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.646 [2024-06-07 14:40:47.229478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.646 [2024-06-07 14:40:47.229482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.646 [2024-06-07 14:40:47.229493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.646 qpair failed and we were unable to recover it. 
00:38:23.646 [2024-06-07 14:40:47.239402] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.647 [2024-06-07 14:40:47.239455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.647 [2024-06-07 14:40:47.239467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.647 [2024-06-07 14:40:47.239472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.647 [2024-06-07 14:40:47.239476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.647 [2024-06-07 14:40:47.239487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.647 qpair failed and we were unable to recover it. 00:38:23.647 [2024-06-07 14:40:47.249432] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.647 [2024-06-07 14:40:47.249470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.647 [2024-06-07 14:40:47.249482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.647 [2024-06-07 14:40:47.249487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.647 [2024-06-07 14:40:47.249492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.647 [2024-06-07 14:40:47.249502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.647 qpair failed and we were unable to recover it. 00:38:23.647 [2024-06-07 14:40:47.259496] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.647 [2024-06-07 14:40:47.259572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.647 [2024-06-07 14:40:47.259584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.647 [2024-06-07 14:40:47.259589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.647 [2024-06-07 14:40:47.259594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.647 [2024-06-07 14:40:47.259605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.647 qpair failed and we were unable to recover it. 
00:38:23.647 [2024-06-07 14:40:47.269368] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.647 [2024-06-07 14:40:47.269422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.647 [2024-06-07 14:40:47.269434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.647 [2024-06-07 14:40:47.269440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.647 [2024-06-07 14:40:47.269444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.647 [2024-06-07 14:40:47.269455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.647 qpair failed and we were unable to recover it. 00:38:23.647 [2024-06-07 14:40:47.279493] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.647 [2024-06-07 14:40:47.279576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.647 [2024-06-07 14:40:47.279588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.647 [2024-06-07 14:40:47.279593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.647 [2024-06-07 14:40:47.279601] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.647 [2024-06-07 14:40:47.279611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.647 qpair failed and we were unable to recover it. 00:38:23.647 [2024-06-07 14:40:47.289524] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.647 [2024-06-07 14:40:47.289563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.647 [2024-06-07 14:40:47.289575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.647 [2024-06-07 14:40:47.289580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.647 [2024-06-07 14:40:47.289585] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.647 [2024-06-07 14:40:47.289596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.647 qpair failed and we were unable to recover it. 
00:38:23.908 [2024-06-07 14:40:47.299620] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.908 [2024-06-07 14:40:47.299691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.908 [2024-06-07 14:40:47.299703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.908 [2024-06-07 14:40:47.299708] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.908 [2024-06-07 14:40:47.299713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.908 [2024-06-07 14:40:47.299723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.908 qpair failed and we were unable to recover it. 00:38:23.908 [2024-06-07 14:40:47.309459] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.908 [2024-06-07 14:40:47.309503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.908 [2024-06-07 14:40:47.309515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.908 [2024-06-07 14:40:47.309520] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.908 [2024-06-07 14:40:47.309525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.908 [2024-06-07 14:40:47.309536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.908 qpair failed and we were unable to recover it. 00:38:23.908 [2024-06-07 14:40:47.319621] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.908 [2024-06-07 14:40:47.319670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.908 [2024-06-07 14:40:47.319682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.908 [2024-06-07 14:40:47.319687] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.908 [2024-06-07 14:40:47.319691] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.908 [2024-06-07 14:40:47.319702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.908 qpair failed and we were unable to recover it. 
00:38:23.908 [2024-06-07 14:40:47.329644] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.908 [2024-06-07 14:40:47.329685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.908 [2024-06-07 14:40:47.329697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.908 [2024-06-07 14:40:47.329702] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.908 [2024-06-07 14:40:47.329707] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.908 [2024-06-07 14:40:47.329717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.908 qpair failed and we were unable to recover it. 00:38:23.908 [2024-06-07 14:40:47.339751] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.908 [2024-06-07 14:40:47.339809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.908 [2024-06-07 14:40:47.339821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.908 [2024-06-07 14:40:47.339826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.908 [2024-06-07 14:40:47.339831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.908 [2024-06-07 14:40:47.339841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.908 qpair failed and we were unable to recover it. 00:38:23.908 [2024-06-07 14:40:47.349704] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.908 [2024-06-07 14:40:47.349830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.908 [2024-06-07 14:40:47.349842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.908 [2024-06-07 14:40:47.349847] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.908 [2024-06-07 14:40:47.349852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.908 [2024-06-07 14:40:47.349862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.908 qpair failed and we were unable to recover it. 
00:38:23.908 [2024-06-07 14:40:47.359719] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.908 [2024-06-07 14:40:47.359761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.908 [2024-06-07 14:40:47.359773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.908 [2024-06-07 14:40:47.359778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.908 [2024-06-07 14:40:47.359783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.908 [2024-06-07 14:40:47.359793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.908 qpair failed and we were unable to recover it. 00:38:23.908 [2024-06-07 14:40:47.369727] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.908 [2024-06-07 14:40:47.369765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.908 [2024-06-07 14:40:47.369778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.908 [2024-06-07 14:40:47.369785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.908 [2024-06-07 14:40:47.369790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.908 [2024-06-07 14:40:47.369800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.908 qpair failed and we were unable to recover it. 00:38:23.908 [2024-06-07 14:40:47.379808] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.908 [2024-06-07 14:40:47.379852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.908 [2024-06-07 14:40:47.379864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.908 [2024-06-07 14:40:47.379869] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.908 [2024-06-07 14:40:47.379874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.908 [2024-06-07 14:40:47.379884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.908 qpair failed and we were unable to recover it. 
00:38:23.908 [2024-06-07 14:40:47.389782] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.908 [2024-06-07 14:40:47.389828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.908 [2024-06-07 14:40:47.389840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.908 [2024-06-07 14:40:47.389845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.908 [2024-06-07 14:40:47.389849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.908 [2024-06-07 14:40:47.389859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.908 qpair failed and we were unable to recover it. 00:38:23.908 [2024-06-07 14:40:47.399834] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.908 [2024-06-07 14:40:47.399888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.908 [2024-06-07 14:40:47.399907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.908 [2024-06-07 14:40:47.399913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.908 [2024-06-07 14:40:47.399918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.908 [2024-06-07 14:40:47.399931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.908 qpair failed and we were unable to recover it. 00:38:23.908 [2024-06-07 14:40:47.409837] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.908 [2024-06-07 14:40:47.409884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.908 [2024-06-07 14:40:47.409903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.909 [2024-06-07 14:40:47.409909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.909 [2024-06-07 14:40:47.409914] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.909 [2024-06-07 14:40:47.409927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.909 qpair failed and we were unable to recover it. 
00:38:23.909 [2024-06-07 14:40:47.419927] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.909 [2024-06-07 14:40:47.419972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.909 [2024-06-07 14:40:47.419986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.909 [2024-06-07 14:40:47.419991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.909 [2024-06-07 14:40:47.419995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.909 [2024-06-07 14:40:47.420007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.909 qpair failed and we were unable to recover it. 00:38:23.909 [2024-06-07 14:40:47.429887] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.909 [2024-06-07 14:40:47.429927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.909 [2024-06-07 14:40:47.429939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.909 [2024-06-07 14:40:47.429945] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.909 [2024-06-07 14:40:47.429949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.909 [2024-06-07 14:40:47.429960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.909 qpair failed and we were unable to recover it. 00:38:23.909 [2024-06-07 14:40:47.439808] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.909 [2024-06-07 14:40:47.439856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.909 [2024-06-07 14:40:47.439868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.909 [2024-06-07 14:40:47.439873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.909 [2024-06-07 14:40:47.439878] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.909 [2024-06-07 14:40:47.439888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.909 qpair failed and we were unable to recover it. 
00:38:23.909 [2024-06-07 14:40:47.449973] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.909 [2024-06-07 14:40:47.450028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.909 [2024-06-07 14:40:47.450039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.909 [2024-06-07 14:40:47.450044] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.909 [2024-06-07 14:40:47.450049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.909 [2024-06-07 14:40:47.450060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.909 qpair failed and we were unable to recover it. 00:38:23.909 [2024-06-07 14:40:47.459909] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.909 [2024-06-07 14:40:47.459954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.909 [2024-06-07 14:40:47.459966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.909 [2024-06-07 14:40:47.459974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.909 [2024-06-07 14:40:47.459979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.909 [2024-06-07 14:40:47.459990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.909 qpair failed and we were unable to recover it. 00:38:23.909 [2024-06-07 14:40:47.469953] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.909 [2024-06-07 14:40:47.469995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.909 [2024-06-07 14:40:47.470007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.909 [2024-06-07 14:40:47.470012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.909 [2024-06-07 14:40:47.470017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.909 [2024-06-07 14:40:47.470027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.909 qpair failed and we were unable to recover it. 
00:38:23.909 [2024-06-07 14:40:47.480076] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.909 [2024-06-07 14:40:47.480124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.909 [2024-06-07 14:40:47.480137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.909 [2024-06-07 14:40:47.480142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.909 [2024-06-07 14:40:47.480147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.909 [2024-06-07 14:40:47.480157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.909 qpair failed and we were unable to recover it. 00:38:23.909 [2024-06-07 14:40:47.490095] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.909 [2024-06-07 14:40:47.490135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.909 [2024-06-07 14:40:47.490148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.909 [2024-06-07 14:40:47.490153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.909 [2024-06-07 14:40:47.490158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.909 [2024-06-07 14:40:47.490169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.909 qpair failed and we were unable to recover it. 00:38:23.909 [2024-06-07 14:40:47.500163] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.909 [2024-06-07 14:40:47.500212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.909 [2024-06-07 14:40:47.500225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.909 [2024-06-07 14:40:47.500230] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.909 [2024-06-07 14:40:47.500234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.909 [2024-06-07 14:40:47.500245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.909 qpair failed and we were unable to recover it. 
00:38:23.909 [2024-06-07 14:40:47.510017] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.909 [2024-06-07 14:40:47.510058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.909 [2024-06-07 14:40:47.510070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.909 [2024-06-07 14:40:47.510076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.909 [2024-06-07 14:40:47.510080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.909 [2024-06-07 14:40:47.510091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.909 qpair failed and we were unable to recover it. 00:38:23.909 [2024-06-07 14:40:47.520185] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.909 [2024-06-07 14:40:47.520232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.909 [2024-06-07 14:40:47.520245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.909 [2024-06-07 14:40:47.520250] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.909 [2024-06-07 14:40:47.520255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.909 [2024-06-07 14:40:47.520266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.909 qpair failed and we were unable to recover it. 00:38:23.909 [2024-06-07 14:40:47.530218] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.909 [2024-06-07 14:40:47.530258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.909 [2024-06-07 14:40:47.530270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.909 [2024-06-07 14:40:47.530276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.909 [2024-06-07 14:40:47.530281] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.909 [2024-06-07 14:40:47.530292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.909 qpair failed and we were unable to recover it. 
00:38:23.909 [2024-06-07 14:40:47.540272] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.909 [2024-06-07 14:40:47.540318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.909 [2024-06-07 14:40:47.540330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.909 [2024-06-07 14:40:47.540335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.909 [2024-06-07 14:40:47.540339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.910 [2024-06-07 14:40:47.540350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.910 qpair failed and we were unable to recover it. 00:38:23.910 [2024-06-07 14:40:47.550255] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:23.910 [2024-06-07 14:40:47.550304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:23.910 [2024-06-07 14:40:47.550320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:23.910 [2024-06-07 14:40:47.550325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:23.910 [2024-06-07 14:40:47.550329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:23.910 [2024-06-07 14:40:47.550339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:23.910 qpair failed and we were unable to recover it. 00:38:24.171 [2024-06-07 14:40:47.560170] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.171 [2024-06-07 14:40:47.560221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.171 [2024-06-07 14:40:47.560233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.171 [2024-06-07 14:40:47.560239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.171 [2024-06-07 14:40:47.560243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.171 [2024-06-07 14:40:47.560254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.171 qpair failed and we were unable to recover it. 
00:38:24.171 [2024-06-07 14:40:47.570305] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.171 [2024-06-07 14:40:47.570345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.171 [2024-06-07 14:40:47.570357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.171 [2024-06-07 14:40:47.570362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.171 [2024-06-07 14:40:47.570366] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.171 [2024-06-07 14:40:47.570377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.171 qpair failed and we were unable to recover it. 00:38:24.171 [2024-06-07 14:40:47.580378] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.171 [2024-06-07 14:40:47.580422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.171 [2024-06-07 14:40:47.580434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.171 [2024-06-07 14:40:47.580439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.171 [2024-06-07 14:40:47.580444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.171 [2024-06-07 14:40:47.580454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.171 qpair failed and we were unable to recover it. 00:38:24.171 [2024-06-07 14:40:47.590372] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.171 [2024-06-07 14:40:47.590413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.171 [2024-06-07 14:40:47.590426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.171 [2024-06-07 14:40:47.590431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.171 [2024-06-07 14:40:47.590436] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.171 [2024-06-07 14:40:47.590449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.171 qpair failed and we were unable to recover it. 
00:38:24.171 [2024-06-07 14:40:47.600416] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.171 [2024-06-07 14:40:47.600460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.171 [2024-06-07 14:40:47.600472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.171 [2024-06-07 14:40:47.600477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.171 [2024-06-07 14:40:47.600482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.171 [2024-06-07 14:40:47.600492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.171 qpair failed and we were unable to recover it. 00:38:24.171 [2024-06-07 14:40:47.610425] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.171 [2024-06-07 14:40:47.610466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.171 [2024-06-07 14:40:47.610479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.171 [2024-06-07 14:40:47.610484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.171 [2024-06-07 14:40:47.610489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.171 [2024-06-07 14:40:47.610499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.171 qpair failed and we were unable to recover it. 00:38:24.171 [2024-06-07 14:40:47.620501] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.171 [2024-06-07 14:40:47.620542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.171 [2024-06-07 14:40:47.620554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.171 [2024-06-07 14:40:47.620559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.171 [2024-06-07 14:40:47.620563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.171 [2024-06-07 14:40:47.620574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.171 qpair failed and we were unable to recover it. 
00:38:24.171 [2024-06-07 14:40:47.630444] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.171 [2024-06-07 14:40:47.630487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.171 [2024-06-07 14:40:47.630498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.171 [2024-06-07 14:40:47.630504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.171 [2024-06-07 14:40:47.630508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.171 [2024-06-07 14:40:47.630518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.171 qpair failed and we were unable to recover it. 00:38:24.171 [2024-06-07 14:40:47.640489] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.172 [2024-06-07 14:40:47.640531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.172 [2024-06-07 14:40:47.640549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.172 [2024-06-07 14:40:47.640554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.172 [2024-06-07 14:40:47.640558] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.172 [2024-06-07 14:40:47.640569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.172 qpair failed and we were unable to recover it. 00:38:24.172 [2024-06-07 14:40:47.650544] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.172 [2024-06-07 14:40:47.650583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.172 [2024-06-07 14:40:47.650595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.172 [2024-06-07 14:40:47.650600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.172 [2024-06-07 14:40:47.650604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.172 [2024-06-07 14:40:47.650615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.172 qpair failed and we were unable to recover it. 
00:38:24.172 [2024-06-07 14:40:47.660607] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.172 [2024-06-07 14:40:47.660650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.172 [2024-06-07 14:40:47.660662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.172 [2024-06-07 14:40:47.660667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.172 [2024-06-07 14:40:47.660671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.172 [2024-06-07 14:40:47.660682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.172 qpair failed and we were unable to recover it. 00:38:24.172 [2024-06-07 14:40:47.670618] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.172 [2024-06-07 14:40:47.670659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.172 [2024-06-07 14:40:47.670672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.172 [2024-06-07 14:40:47.670677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.172 [2024-06-07 14:40:47.670682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.172 [2024-06-07 14:40:47.670692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.172 qpair failed and we were unable to recover it. 00:38:24.172 [2024-06-07 14:40:47.680620] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.172 [2024-06-07 14:40:47.680668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.172 [2024-06-07 14:40:47.680680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.172 [2024-06-07 14:40:47.680685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.172 [2024-06-07 14:40:47.680692] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.172 [2024-06-07 14:40:47.680702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.172 qpair failed and we were unable to recover it. 
00:38:24.172 [2024-06-07 14:40:47.690662] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.172 [2024-06-07 14:40:47.690720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.172 [2024-06-07 14:40:47.690732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.172 [2024-06-07 14:40:47.690738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.172 [2024-06-07 14:40:47.690742] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.172 [2024-06-07 14:40:47.690753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.172 qpair failed and we were unable to recover it. 00:38:24.172 [2024-06-07 14:40:47.700705] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.172 [2024-06-07 14:40:47.700751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.172 [2024-06-07 14:40:47.700763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.172 [2024-06-07 14:40:47.700768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.172 [2024-06-07 14:40:47.700772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.172 [2024-06-07 14:40:47.700783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.172 qpair failed and we were unable to recover it. 00:38:24.172 [2024-06-07 14:40:47.710579] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.172 [2024-06-07 14:40:47.710635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.172 [2024-06-07 14:40:47.710648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.172 [2024-06-07 14:40:47.710654] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.172 [2024-06-07 14:40:47.710658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.172 [2024-06-07 14:40:47.710669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.172 qpair failed and we were unable to recover it. 
00:38:24.172 [2024-06-07 14:40:47.720709] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.172 [2024-06-07 14:40:47.720759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.172 [2024-06-07 14:40:47.720771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.172 [2024-06-07 14:40:47.720776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.172 [2024-06-07 14:40:47.720781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.172 [2024-06-07 14:40:47.720792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.172 qpair failed and we were unable to recover it. 00:38:24.172 [2024-06-07 14:40:47.730738] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.172 [2024-06-07 14:40:47.730779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.172 [2024-06-07 14:40:47.730792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.172 [2024-06-07 14:40:47.730797] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.172 [2024-06-07 14:40:47.730802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.172 [2024-06-07 14:40:47.730812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.172 qpair failed and we were unable to recover it. 00:38:24.172 [2024-06-07 14:40:47.740806] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.172 [2024-06-07 14:40:47.740848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.172 [2024-06-07 14:40:47.740860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.172 [2024-06-07 14:40:47.740865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.172 [2024-06-07 14:40:47.740870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.172 [2024-06-07 14:40:47.740880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.172 qpair failed and we were unable to recover it. 
00:38:24.172 [2024-06-07 14:40:47.750799] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.172 [2024-06-07 14:40:47.750885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.172 [2024-06-07 14:40:47.750897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.172 [2024-06-07 14:40:47.750903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.172 [2024-06-07 14:40:47.750908] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.172 [2024-06-07 14:40:47.750918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.172 qpair failed and we were unable to recover it. 00:38:24.172 [2024-06-07 14:40:47.760699] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.172 [2024-06-07 14:40:47.760791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.172 [2024-06-07 14:40:47.760804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.172 [2024-06-07 14:40:47.760810] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.172 [2024-06-07 14:40:47.760814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.172 [2024-06-07 14:40:47.760825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.172 qpair failed and we were unable to recover it. 00:38:24.172 [2024-06-07 14:40:47.770845] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.172 [2024-06-07 14:40:47.770890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.172 [2024-06-07 14:40:47.770902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.173 [2024-06-07 14:40:47.770910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.173 [2024-06-07 14:40:47.770914] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.173 [2024-06-07 14:40:47.770925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.173 qpair failed and we were unable to recover it. 
00:38:24.173 [2024-06-07 14:40:47.780909] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.173 [2024-06-07 14:40:47.780958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.173 [2024-06-07 14:40:47.780977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.173 [2024-06-07 14:40:47.780983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.173 [2024-06-07 14:40:47.780988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.173 [2024-06-07 14:40:47.781001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.173 qpair failed and we were unable to recover it. 00:38:24.173 [2024-06-07 14:40:47.790905] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.173 [2024-06-07 14:40:47.790955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.173 [2024-06-07 14:40:47.790974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.173 [2024-06-07 14:40:47.790980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.173 [2024-06-07 14:40:47.790984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.173 [2024-06-07 14:40:47.790998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.173 qpair failed and we were unable to recover it. 00:38:24.173 [2024-06-07 14:40:47.800928] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.173 [2024-06-07 14:40:47.800978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.173 [2024-06-07 14:40:47.800997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.173 [2024-06-07 14:40:47.801003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.173 [2024-06-07 14:40:47.801008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.173 [2024-06-07 14:40:47.801021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.173 qpair failed and we were unable to recover it. 
00:38:24.173 [2024-06-07 14:40:47.810946] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.173 [2024-06-07 14:40:47.810991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.173 [2024-06-07 14:40:47.811005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.173 [2024-06-07 14:40:47.811010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.173 [2024-06-07 14:40:47.811014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.173 [2024-06-07 14:40:47.811026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.173 qpair failed and we were unable to recover it. 00:38:24.434 [2024-06-07 14:40:47.821022] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.434 [2024-06-07 14:40:47.821066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.434 [2024-06-07 14:40:47.821079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.434 [2024-06-07 14:40:47.821085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.434 [2024-06-07 14:40:47.821089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.434 [2024-06-07 14:40:47.821100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.434 qpair failed and we were unable to recover it. 00:38:24.434 [2024-06-07 14:40:47.830980] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.434 [2024-06-07 14:40:47.831024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.434 [2024-06-07 14:40:47.831037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.434 [2024-06-07 14:40:47.831042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.434 [2024-06-07 14:40:47.831046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.434 [2024-06-07 14:40:47.831057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.434 qpair failed and we were unable to recover it. 
00:38:24.434 [2024-06-07 14:40:47.841049] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.434 [2024-06-07 14:40:47.841093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.434 [2024-06-07 14:40:47.841106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.434 [2024-06-07 14:40:47.841111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.434 [2024-06-07 14:40:47.841116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.434 [2024-06-07 14:40:47.841126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.434 qpair failed and we were unable to recover it. 00:38:24.434 [2024-06-07 14:40:47.850930] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.434 [2024-06-07 14:40:47.851008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.434 [2024-06-07 14:40:47.851021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.434 [2024-06-07 14:40:47.851026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.434 [2024-06-07 14:40:47.851031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.434 [2024-06-07 14:40:47.851042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.434 qpair failed and we were unable to recover it. 00:38:24.434 [2024-06-07 14:40:47.861136] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.434 [2024-06-07 14:40:47.861177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.434 [2024-06-07 14:40:47.861189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.434 [2024-06-07 14:40:47.861201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.434 [2024-06-07 14:40:47.861205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.434 [2024-06-07 14:40:47.861216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.434 qpair failed and we were unable to recover it. 
00:38:24.434 [2024-06-07 14:40:47.870993] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.434 [2024-06-07 14:40:47.871033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.434 [2024-06-07 14:40:47.871045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.434 [2024-06-07 14:40:47.871050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.434 [2024-06-07 14:40:47.871055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.434 [2024-06-07 14:40:47.871065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.434 qpair failed and we were unable to recover it. 00:38:24.434 [2024-06-07 14:40:47.881160] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.434 [2024-06-07 14:40:47.881238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.434 [2024-06-07 14:40:47.881250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.434 [2024-06-07 14:40:47.881255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.434 [2024-06-07 14:40:47.881260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.434 [2024-06-07 14:40:47.881270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.434 qpair failed and we were unable to recover it. 00:38:24.434 [2024-06-07 14:40:47.891181] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.434 [2024-06-07 14:40:47.891230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.434 [2024-06-07 14:40:47.891242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.434 [2024-06-07 14:40:47.891247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.434 [2024-06-07 14:40:47.891252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.435 [2024-06-07 14:40:47.891262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.435 qpair failed and we were unable to recover it. 
00:38:24.435 [2024-06-07 14:40:47.901218] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.435 [2024-06-07 14:40:47.901263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.435 [2024-06-07 14:40:47.901276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.435 [2024-06-07 14:40:47.901281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.435 [2024-06-07 14:40:47.901285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.435 [2024-06-07 14:40:47.901296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.435 qpair failed and we were unable to recover it. 00:38:24.435 [2024-06-07 14:40:47.911238] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.435 [2024-06-07 14:40:47.911278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.435 [2024-06-07 14:40:47.911290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.435 [2024-06-07 14:40:47.911295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.435 [2024-06-07 14:40:47.911299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.435 [2024-06-07 14:40:47.911310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.435 qpair failed and we were unable to recover it. 00:38:24.435 [2024-06-07 14:40:47.921256] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.435 [2024-06-07 14:40:47.921302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.435 [2024-06-07 14:40:47.921314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.435 [2024-06-07 14:40:47.921319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.435 [2024-06-07 14:40:47.921324] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.435 [2024-06-07 14:40:47.921335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.435 qpair failed and we were unable to recover it. 
00:38:24.435 [2024-06-07 14:40:47.931290] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.435 [2024-06-07 14:40:47.931338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.435 [2024-06-07 14:40:47.931350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.435 [2024-06-07 14:40:47.931355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.435 [2024-06-07 14:40:47.931359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.435 [2024-06-07 14:40:47.931370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.435 qpair failed and we were unable to recover it. 00:38:24.435 [2024-06-07 14:40:47.941211] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.435 [2024-06-07 14:40:47.941255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.435 [2024-06-07 14:40:47.941267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.435 [2024-06-07 14:40:47.941272] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.435 [2024-06-07 14:40:47.941276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.435 [2024-06-07 14:40:47.941287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.435 qpair failed and we were unable to recover it. 00:38:24.435 [2024-06-07 14:40:47.951348] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.435 [2024-06-07 14:40:47.951391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.435 [2024-06-07 14:40:47.951407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.435 [2024-06-07 14:40:47.951412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.435 [2024-06-07 14:40:47.951416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.435 [2024-06-07 14:40:47.951428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.435 qpair failed and we were unable to recover it. 
00:38:24.435 [2024-06-07 14:40:47.961350] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.435 [2024-06-07 14:40:47.961398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.435 [2024-06-07 14:40:47.961410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.435 [2024-06-07 14:40:47.961415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.435 [2024-06-07 14:40:47.961419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.435 [2024-06-07 14:40:47.961430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.435 qpair failed and we were unable to recover it. 00:38:24.435 [2024-06-07 14:40:47.971399] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.435 [2024-06-07 14:40:47.971480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.435 [2024-06-07 14:40:47.971492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.435 [2024-06-07 14:40:47.971498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.435 [2024-06-07 14:40:47.971502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.435 [2024-06-07 14:40:47.971512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.435 qpair failed and we were unable to recover it. 00:38:24.435 [2024-06-07 14:40:47.981418] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.435 [2024-06-07 14:40:47.981458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.435 [2024-06-07 14:40:47.981470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.435 [2024-06-07 14:40:47.981475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.435 [2024-06-07 14:40:47.981480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.435 [2024-06-07 14:40:47.981490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.435 qpair failed and we were unable to recover it. 
00:38:24.435 [2024-06-07 14:40:47.991475] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.435 [2024-06-07 14:40:47.991516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.435 [2024-06-07 14:40:47.991528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.435 [2024-06-07 14:40:47.991532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.435 [2024-06-07 14:40:47.991537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.435 [2024-06-07 14:40:47.991566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.435 qpair failed and we were unable to recover it. 00:38:24.435 [2024-06-07 14:40:48.001355] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.435 [2024-06-07 14:40:48.001402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.435 [2024-06-07 14:40:48.001414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.435 [2024-06-07 14:40:48.001419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.435 [2024-06-07 14:40:48.001423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.435 [2024-06-07 14:40:48.001434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.435 qpair failed and we were unable to recover it. 00:38:24.435 [2024-06-07 14:40:48.011502] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.435 [2024-06-07 14:40:48.011540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.435 [2024-06-07 14:40:48.011552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.435 [2024-06-07 14:40:48.011558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.435 [2024-06-07 14:40:48.011562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.435 [2024-06-07 14:40:48.011573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.435 qpair failed and we were unable to recover it. 
00:38:24.435 [2024-06-07 14:40:48.021520] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.435 [2024-06-07 14:40:48.021560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.435 [2024-06-07 14:40:48.021572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.435 [2024-06-07 14:40:48.021577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.435 [2024-06-07 14:40:48.021581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.435 [2024-06-07 14:40:48.021592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.435 qpair failed and we were unable to recover it. 00:38:24.435 [2024-06-07 14:40:48.031558] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.436 [2024-06-07 14:40:48.031601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.436 [2024-06-07 14:40:48.031612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.436 [2024-06-07 14:40:48.031617] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.436 [2024-06-07 14:40:48.031622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.436 [2024-06-07 14:40:48.031632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.436 qpair failed and we were unable to recover it. 00:38:24.436 [2024-06-07 14:40:48.041586] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.436 [2024-06-07 14:40:48.041666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.436 [2024-06-07 14:40:48.041680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.436 [2024-06-07 14:40:48.041685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.436 [2024-06-07 14:40:48.041690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.436 [2024-06-07 14:40:48.041700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.436 qpair failed and we were unable to recover it. 
00:38:24.436 [2024-06-07 14:40:48.051577] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.436 [2024-06-07 14:40:48.051617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.436 [2024-06-07 14:40:48.051629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.436 [2024-06-07 14:40:48.051633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.436 [2024-06-07 14:40:48.051638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.436 [2024-06-07 14:40:48.051648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.436 qpair failed and we were unable to recover it. 00:38:24.436 [2024-06-07 14:40:48.061638] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.436 [2024-06-07 14:40:48.061711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.436 [2024-06-07 14:40:48.061723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.436 [2024-06-07 14:40:48.061728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.436 [2024-06-07 14:40:48.061732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.436 [2024-06-07 14:40:48.061743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.436 qpair failed and we were unable to recover it. 00:38:24.436 [2024-06-07 14:40:48.071668] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.436 [2024-06-07 14:40:48.071708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.436 [2024-06-07 14:40:48.071719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.436 [2024-06-07 14:40:48.071724] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.436 [2024-06-07 14:40:48.071729] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.436 [2024-06-07 14:40:48.071739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.436 qpair failed and we were unable to recover it. 
00:38:24.697 [2024-06-07 14:40:48.081685] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.697 [2024-06-07 14:40:48.081730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.697 [2024-06-07 14:40:48.081742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.697 [2024-06-07 14:40:48.081747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.697 [2024-06-07 14:40:48.081755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.697 [2024-06-07 14:40:48.081765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.697 qpair failed and we were unable to recover it. 00:38:24.697 [2024-06-07 14:40:48.091702] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.697 [2024-06-07 14:40:48.091746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.697 [2024-06-07 14:40:48.091757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.697 [2024-06-07 14:40:48.091762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.697 [2024-06-07 14:40:48.091767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.697 [2024-06-07 14:40:48.091777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.697 qpair failed and we were unable to recover it. 00:38:24.697 [2024-06-07 14:40:48.101734] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.697 [2024-06-07 14:40:48.101779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.697 [2024-06-07 14:40:48.101790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.697 [2024-06-07 14:40:48.101795] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.697 [2024-06-07 14:40:48.101800] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.697 [2024-06-07 14:40:48.101810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.697 qpair failed and we were unable to recover it. 
00:38:24.697 [2024-06-07 14:40:48.111773] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.697 [2024-06-07 14:40:48.111815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.697 [2024-06-07 14:40:48.111827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.697 [2024-06-07 14:40:48.111832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.697 [2024-06-07 14:40:48.111837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.697 [2024-06-07 14:40:48.111848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.697 qpair failed and we were unable to recover it. 00:38:24.697 [2024-06-07 14:40:48.121788] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.697 [2024-06-07 14:40:48.121838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.697 [2024-06-07 14:40:48.121850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.697 [2024-06-07 14:40:48.121855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.697 [2024-06-07 14:40:48.121859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.697 [2024-06-07 14:40:48.121870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.697 qpair failed and we were unable to recover it. 00:38:24.697 [2024-06-07 14:40:48.131696] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.697 [2024-06-07 14:40:48.131745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.697 [2024-06-07 14:40:48.131757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.697 [2024-06-07 14:40:48.131762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.697 [2024-06-07 14:40:48.131766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.697 [2024-06-07 14:40:48.131777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.697 qpair failed and we were unable to recover it. 
00:38:24.697 [2024-06-07 14:40:48.141855] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.697 [2024-06-07 14:40:48.141892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.697 [2024-06-07 14:40:48.141904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.697 [2024-06-07 14:40:48.141909] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.697 [2024-06-07 14:40:48.141913] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.697 [2024-06-07 14:40:48.141923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.697 qpair failed and we were unable to recover it. 00:38:24.697 [2024-06-07 14:40:48.151880] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.697 [2024-06-07 14:40:48.151925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.697 [2024-06-07 14:40:48.151937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.697 [2024-06-07 14:40:48.151943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.697 [2024-06-07 14:40:48.151947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.697 [2024-06-07 14:40:48.151958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.697 qpair failed and we were unable to recover it. 00:38:24.697 [2024-06-07 14:40:48.161891] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.697 [2024-06-07 14:40:48.161957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.697 [2024-06-07 14:40:48.161976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.697 [2024-06-07 14:40:48.161982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.697 [2024-06-07 14:40:48.161987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.697 [2024-06-07 14:40:48.162000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.697 qpair failed and we were unable to recover it. 
00:38:24.697 [2024-06-07 14:40:48.171928] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.698 [2024-06-07 14:40:48.171972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.698 [2024-06-07 14:40:48.171991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.698 [2024-06-07 14:40:48.171997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.698 [2024-06-07 14:40:48.172005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.698 [2024-06-07 14:40:48.172018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.698 qpair failed and we were unable to recover it. 00:38:24.698 [2024-06-07 14:40:48.181956] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.698 [2024-06-07 14:40:48.181998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.698 [2024-06-07 14:40:48.182012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.698 [2024-06-07 14:40:48.182018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.698 [2024-06-07 14:40:48.182022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.698 [2024-06-07 14:40:48.182034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.698 qpair failed and we were unable to recover it. 00:38:24.698 [2024-06-07 14:40:48.191993] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.698 [2024-06-07 14:40:48.192038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.698 [2024-06-07 14:40:48.192057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.698 [2024-06-07 14:40:48.192062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.698 [2024-06-07 14:40:48.192067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.698 [2024-06-07 14:40:48.192080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.698 qpair failed and we were unable to recover it. 
00:38:24.698 [2024-06-07 14:40:48.202007] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.698 [2024-06-07 14:40:48.202057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.698 [2024-06-07 14:40:48.202071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.698 [2024-06-07 14:40:48.202076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.698 [2024-06-07 14:40:48.202080] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.698 [2024-06-07 14:40:48.202091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.698 qpair failed and we were unable to recover it. 00:38:24.698 [2024-06-07 14:40:48.211900] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.698 [2024-06-07 14:40:48.211946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.698 [2024-06-07 14:40:48.211958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.698 [2024-06-07 14:40:48.211963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.698 [2024-06-07 14:40:48.211968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.698 [2024-06-07 14:40:48.211979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.698 qpair failed and we were unable to recover it. 00:38:24.698 [2024-06-07 14:40:48.222023] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.698 [2024-06-07 14:40:48.222063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.698 [2024-06-07 14:40:48.222075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.698 [2024-06-07 14:40:48.222080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.698 [2024-06-07 14:40:48.222084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.698 [2024-06-07 14:40:48.222095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.698 qpair failed and we were unable to recover it. 
00:38:24.698 [2024-06-07 14:40:48.232134] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.698 [2024-06-07 14:40:48.232176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.698 [2024-06-07 14:40:48.232188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.698 [2024-06-07 14:40:48.232193] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.698 [2024-06-07 14:40:48.232201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.698 [2024-06-07 14:40:48.232212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.698 qpair failed and we were unable to recover it. 00:38:24.698 [2024-06-07 14:40:48.242038] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.698 [2024-06-07 14:40:48.242087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.698 [2024-06-07 14:40:48.242100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.698 [2024-06-07 14:40:48.242105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.698 [2024-06-07 14:40:48.242109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.698 [2024-06-07 14:40:48.242119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.698 qpair failed and we were unable to recover it. 00:38:24.698 [2024-06-07 14:40:48.252147] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.698 [2024-06-07 14:40:48.252188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.698 [2024-06-07 14:40:48.252203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.698 [2024-06-07 14:40:48.252208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.698 [2024-06-07 14:40:48.252212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.698 [2024-06-07 14:40:48.252223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.698 qpair failed and we were unable to recover it. 
00:38:24.698 [2024-06-07 14:40:48.262071] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.698 [2024-06-07 14:40:48.262135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.698 [2024-06-07 14:40:48.262147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.698 [2024-06-07 14:40:48.262155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.698 [2024-06-07 14:40:48.262160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.698 [2024-06-07 14:40:48.262170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.698 qpair failed and we were unable to recover it. 00:38:24.698 [2024-06-07 14:40:48.272199] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.698 [2024-06-07 14:40:48.272247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.698 [2024-06-07 14:40:48.272260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.698 [2024-06-07 14:40:48.272265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.698 [2024-06-07 14:40:48.272269] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f217c000b90 00:38:24.698 [2024-06-07 14:40:48.272280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:38:24.698 qpair failed and we were unable to recover it. 00:38:24.698 [2024-06-07 14:40:48.282391] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.698 [2024-06-07 14:40:48.282540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.698 [2024-06-07 14:40:48.282603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.698 [2024-06-07 14:40:48.282626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.698 [2024-06-07 14:40:48.282647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2184000b90 00:38:24.698 [2024-06-07 14:40:48.282698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:24.698 qpair failed and we were unable to recover it. 
00:38:24.698 [2024-06-07 14:40:48.292254] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.698 [2024-06-07 14:40:48.292321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.698 [2024-06-07 14:40:48.292351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.698 [2024-06-07 14:40:48.292366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.698 [2024-06-07 14:40:48.292379] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2184000b90 00:38:24.698 [2024-06-07 14:40:48.292408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:38:24.698 qpair failed and we were unable to recover it. 00:38:24.698 [2024-06-07 14:40:48.302285] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.698 [2024-06-07 14:40:48.302342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.698 [2024-06-07 14:40:48.302367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.699 [2024-06-07 14:40:48.302375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.699 [2024-06-07 14:40:48.302382] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x108c730 00:38:24.699 [2024-06-07 14:40:48.302401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:24.699 qpair failed and we were unable to recover it. 00:38:24.699 [2024-06-07 14:40:48.312338] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.699 [2024-06-07 14:40:48.312424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.699 [2024-06-07 14:40:48.312443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.699 [2024-06-07 14:40:48.312450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.699 [2024-06-07 14:40:48.312457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x108c730 00:38:24.699 [2024-06-07 14:40:48.312472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:38:24.699 qpair failed and we were unable to recover it. 
00:38:24.699 [2024-06-07 14:40:48.322360] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.699 [2024-06-07 14:40:48.322481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.699 [2024-06-07 14:40:48.322543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.699 [2024-06-07 14:40:48.322567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.699 [2024-06-07 14:40:48.322587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2174000b90 00:38:24.699 [2024-06-07 14:40:48.322639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.699 qpair failed and we were unable to recover it. 00:38:24.699 [2024-06-07 14:40:48.332373] ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:24.699 [2024-06-07 14:40:48.332445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:24.699 [2024-06-07 14:40:48.332482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:24.699 [2024-06-07 14:40:48.332498] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:24.699 [2024-06-07 14:40:48.332513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2174000b90 00:38:24.699 [2024-06-07 14:40:48.332547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:38:24.699 qpair failed and we were unable to recover it. 00:38:24.699 [2024-06-07 14:40:48.332928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109a290 is same with the state(5) to be set 00:38:24.699 [2024-06-07 14:40:48.333229] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109a290 (9): Bad file descriptor 00:38:24.699 Initializing NVMe Controllers 00:38:24.699 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:24.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:24.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:38:24.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:38:24.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:38:24.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:38:24.699 Initialization complete. Launching workers. 
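The burst of errors above is the expected signature of the target-disconnect case: the target no longer tracks the admin controller, so each I/O-queue CONNECT is rejected with sct 1 / sc 130 (0x82, i.e. a command-specific Connect Invalid Parameters status, matching the "Unknown controller ID 0x1" message on the target side), and spdk_nvme_qpair_process_completions then reports transport error -6 (ENXIO) for that qpair. As a purely illustrative sketch (not part of the test), the same listener could be probed by hand from the initiator with stock nvme-cli, assuming the tool is installed and reusing the address and subsystem NQN shown in the log:

    # Ask the target at 10.0.0.2:4420 what it exports (illustrative; assumes nvme-cli is present)
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    # Attempt the same fabric CONNECT the test keeps retrying; it only succeeds
    # while the target still has controller state for this host
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # Tear the kernel-side controller down again
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1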
00:38:24.699 Starting thread on core 1 00:38:24.699 Starting thread on core 2 00:38:24.699 Starting thread on core 3 00:38:24.699 Starting thread on core 0 00:38:24.699 14:40:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:38:24.963 00:38:24.963 real 0m11.425s 00:38:24.963 user 0m21.174s 00:38:24.963 sys 0m3.543s 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:24.963 ************************************ 00:38:24.963 END TEST nvmf_target_disconnect_tc2 00:38:24.963 ************************************ 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:24.963 rmmod nvme_tcp 00:38:24.963 rmmod nvme_fabrics 00:38:24.963 rmmod nvme_keyring 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 822654 ']' 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 822654 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 822654 ']' 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 822654 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 822654 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 822654' 00:38:24.963 killing process with pid 822654 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 822654 00:38:24.963 14:40:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 822654 00:38:25.228 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:25.228 14:40:48 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:25.228 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:25.228 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:25.228 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:25.228 14:40:48 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.228 14:40:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:25.228 14:40:48 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.139 14:40:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:27.139 00:38:27.139 real 0m22.320s 00:38:27.139 user 0m49.497s 00:38:27.139 sys 0m10.063s 00:38:27.139 14:40:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:27.139 14:40:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:27.139 ************************************ 00:38:27.139 END TEST nvmf_target_disconnect 00:38:27.139 ************************************ 00:38:27.139 14:40:50 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:38:27.139 14:40:50 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:27.139 14:40:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:27.139 14:40:50 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:38:27.139 00:38:27.139 real 31m21.944s 00:38:27.139 user 78m18.761s 00:38:27.139 sys 8m42.563s 00:38:27.139 14:40:50 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:27.139 14:40:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:27.139 ************************************ 00:38:27.139 END TEST nvmf_tcp 00:38:27.139 ************************************ 00:38:27.400 14:40:50 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:38:27.400 14:40:50 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:27.400 14:40:50 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:27.400 14:40:50 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:27.400 14:40:50 -- common/autotest_common.sh@10 -- # set +x 00:38:27.400 ************************************ 00:38:27.400 START TEST spdkcli_nvmf_tcp 00:38:27.400 ************************************ 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:27.400 * Looking for test storage... 
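The spdkcli_nvmf_tcp test starting here drives the target configuration through spdkcli_job.py: it creates six malloc bdevs, a TCP transport, three subsystems with namespaces, listeners and host entries, verifies the configuration tree against a match file, and then deletes everything again. For readers more familiar with the JSON-RPC interface than with spdkcli, a minimal sketch of the first few create steps (assuming a target already listening on the default /var/tmp/spdk.sock and reusing names from the test; the test itself also sets transport options such as max_io_qpairs_per_ctrlr and io_unit_size through spdkcli) would be:

    # Approximate rpc.py equivalents of the spdkcli create commands traced below (sketch only)
    ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc3            # 32 MiB bdev with 512-byte blocks
    ./scripts/rpc.py nvmf_create_transport -t TCP
    ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -a -s N37SXV509SRW -m 4
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260

The clear-config half of the test mirrors this with the corresponding delete commands before killing the target process.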
00:38:27.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=824514 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 824514 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 824514 ']' 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:27.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:27.400 14:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:27.400 [2024-06-07 14:40:51.018056] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:38:27.400 [2024-06-07 14:40:51.018112] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824514 ] 00:38:27.400 EAL: No free 2048 kB hugepages reported on node 1 00:38:27.661 [2024-06-07 14:40:51.081641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:27.661 [2024-06-07 14:40:51.114531] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:27.661 [2024-06-07 14:40:51.114628] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:28.232 14:40:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:28.232 14:40:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0 00:38:28.232 14:40:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:38:28.232 14:40:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:28.232 14:40:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:28.232 14:40:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:38:28.232 14:40:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:38:28.232 14:40:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:38:28.232 14:40:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:28.232 14:40:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:28.232 14:40:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:38:28.232 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:38:28.232 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:38:28.232 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:38:28.232 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:38:28.232 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:38:28.232 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:38:28.232 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:28.232 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:38:28.232 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:28.232 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:38:28.232 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:38:28.232 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:38:28.232 ' 00:38:30.772 [2024-06-07 14:40:54.160429] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:31.711 [2024-06-07 14:40:55.324234] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:38:34.250 [2024-06-07 14:40:57.458424] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:38:35.665 [2024-06-07 14:40:59.291861] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:38:37.573 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:38:37.573 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:38:37.573 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:38:37.573 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:38:37.573 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:38:37.573 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:38:37.573 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:38:37.573 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:38:37.573 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:37.573 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:37.573 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:38:37.573 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:38:37.573 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:38:37.573 14:41:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:38:37.573 14:41:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:37.573 14:41:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:37.573 14:41:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:38:37.573 14:41:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:37.573 14:41:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:37.573 14:41:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:38:37.573 14:41:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:38:37.833 14:41:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:38:37.833 14:41:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:38:37.833 14:41:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:38:37.833 14:41:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:37.833 14:41:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:37.833 14:41:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:38:37.833 14:41:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:37.833 14:41:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:37.833 14:41:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:38:37.833 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:38:37.833 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:37.833 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:38:37.833 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:38:37.833 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:38:37.833 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:38:37.833 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:37.833 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:38:37.833 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:38:37.833 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:38:37.833 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:38:37.833 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:38:37.833 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:38:37.833 ' 00:38:43.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:43.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:43.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:43.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:43.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:38:43.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:38:43.112 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:43.112 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:43.112 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:43.112 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:43.112 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:38:43.112 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:38:43.112 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:43.112 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 824514 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 824514 ']' 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 824514 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 824514 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 824514' 00:38:43.112 killing process with pid 824514 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 824514 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 824514 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 824514 ']' 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 824514 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 824514 ']' 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 824514 00:38:43.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (824514) - No such process 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 824514 is not found' 00:38:43.112 Process with pid 824514 is not found 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:38:43.112 00:38:43.112 real 0m15.589s 00:38:43.112 user 0m32.278s 00:38:43.112 sys 0m0.694s 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:43.112 14:41:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:43.112 ************************************ 00:38:43.112 END TEST spdkcli_nvmf_tcp 00:38:43.112 ************************************ 00:38:43.112 14:41:06 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:43.112 14:41:06 -- 
common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:43.112 14:41:06 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:43.112 14:41:06 -- common/autotest_common.sh@10 -- # set +x 00:38:43.112 ************************************ 00:38:43.112 START TEST nvmf_identify_passthru 00:38:43.112 ************************************ 00:38:43.112 14:41:06 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:43.112 * Looking for test storage... 00:38:43.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:43.112 14:41:06 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:43.112 14:41:06 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:43.112 14:41:06 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:43.112 14:41:06 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:43.112 14:41:06 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.112 14:41:06 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.112 14:41:06 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.112 14:41:06 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:43.112 14:41:06 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:43.112 14:41:06 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:43.112 14:41:06 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:43.112 14:41:06 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:43.112 14:41:06 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:43.112 14:41:06 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.112 14:41:06 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.112 14:41:06 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.112 14:41:06 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:43.112 14:41:06 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.112 14:41:06 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.112 14:41:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:43.112 14:41:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:43.112 14:41:06 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:38:43.112 14:41:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:51.277 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:51.277 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:51.277 14:41:14 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:51.277 Found net devices under 0000:31:00.0: cvl_0_0 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:51.277 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:51.278 Found net devices under 0000:31:00.1: cvl_0_1 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:51.278 14:41:14 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:51.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:51.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.742 ms 00:38:51.278 00:38:51.278 --- 10.0.0.2 ping statistics --- 00:38:51.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:51.278 rtt min/avg/max/mdev = 0.742/0.742/0.742/0.000 ms 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:51.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:51.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:38:51.278 00:38:51.278 --- 10.0.0.1 ping statistics --- 00:38:51.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:51.278 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:51.278 14:41:14 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:51.278 14:41:14 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:38:51.278 14:41:14 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:51.278 14:41:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:51.278 14:41:14 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:38:51.278 14:41:14 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:38:51.278 14:41:14 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:38:51.278 14:41:14 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:38:51.278 14:41:14 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:38:51.278 14:41:14 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=() 00:38:51.278 14:41:14 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:38:51.278 14:41:14 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:51.278 14:41:14 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:51.278 14:41:14 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:38:51.278 14:41:14 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:38:51.278 14:41:14 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:65:00.0 00:38:51.278 14:41:14 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:65:00.0 00:38:51.278 14:41:14 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:38:51.278 14:41:14 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:38:51.278 14:41:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:38:51.278 14:41:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:38:51.278 14:41:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:38:51.278 EAL: No free 2048 kB hugepages reported on node 1 00:38:51.538 
14:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605499 00:38:51.538 14:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:38:51.538 14:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:38:51.538 14:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:38:51.538 EAL: No free 2048 kB hugepages reported on node 1 00:38:52.109 14:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:38:52.109 14:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:38:52.109 14:41:15 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:52.109 14:41:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.109 14:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:38:52.109 14:41:15 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:52.109 14:41:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.109 14:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=831883 00:38:52.109 14:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:52.109 14:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:38:52.109 14:41:15 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 831883 00:38:52.109 14:41:15 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 831883 ']' 00:38:52.109 14:41:15 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:52.109 14:41:15 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:52.109 14:41:15 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:52.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:52.109 14:41:15 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:52.109 14:41:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.109 [2024-06-07 14:41:15.701398] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:38:52.109 [2024-06-07 14:41:15.701482] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:52.109 EAL: No free 2048 kB hugepages reported on node 1 00:38:52.369 [2024-06-07 14:41:15.773675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:52.370 [2024-06-07 14:41:15.805951] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:52.370 [2024-06-07 14:41:15.805986] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:52.370 [2024-06-07 14:41:15.805993] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:52.370 [2024-06-07 14:41:15.806000] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:52.370 [2024-06-07 14:41:15.806006] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:52.370 [2024-06-07 14:41:15.806145] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:52.370 [2024-06-07 14:41:15.806285] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:38:52.370 [2024-06-07 14:41:15.806344] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.370 [2024-06-07 14:41:15.806345] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:38:52.941 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:52.941 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:38:52.941 14:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:38:52.941 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:52.941 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.941 INFO: Log level set to 20 00:38:52.941 INFO: Requests: 00:38:52.941 { 00:38:52.941 "jsonrpc": "2.0", 00:38:52.941 "method": "nvmf_set_config", 00:38:52.941 "id": 1, 00:38:52.941 "params": { 00:38:52.941 "admin_cmd_passthru": { 00:38:52.941 "identify_ctrlr": true 00:38:52.941 } 00:38:52.941 } 00:38:52.941 } 00:38:52.941 00:38:52.941 INFO: response: 00:38:52.941 { 00:38:52.941 "jsonrpc": "2.0", 00:38:52.941 "id": 1, 00:38:52.941 "result": true 00:38:52.941 } 00:38:52.941 00:38:52.941 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:52.941 14:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:38:52.941 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:52.941 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:52.941 INFO: Setting log level to 20 00:38:52.941 INFO: Setting log level to 20 00:38:52.941 INFO: Log level set to 20 00:38:52.941 INFO: Log level set to 20 00:38:52.941 INFO: Requests: 00:38:52.941 { 00:38:52.941 "jsonrpc": "2.0", 00:38:52.941 "method": "framework_start_init", 00:38:52.941 "id": 1 00:38:52.941 } 00:38:52.941 00:38:52.941 INFO: Requests: 00:38:52.941 { 00:38:52.941 "jsonrpc": "2.0", 00:38:52.941 "method": "framework_start_init", 00:38:52.941 "id": 1 00:38:52.941 } 00:38:52.941 00:38:52.941 [2024-06-07 14:41:16.538607] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:38:52.941 INFO: response: 00:38:52.941 { 00:38:52.941 "jsonrpc": "2.0", 00:38:52.941 "id": 1, 00:38:52.941 "result": true 00:38:52.941 } 00:38:52.941 00:38:52.941 INFO: response: 00:38:52.941 { 00:38:52.941 "jsonrpc": "2.0", 00:38:52.941 "id": 1, 00:38:52.941 "result": true 00:38:52.941 } 00:38:52.941 00:38:52.941 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:52.941 14:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:52.941 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:52.941 14:41:16 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:38:52.941 INFO: Setting log level to 40 00:38:52.941 INFO: Setting log level to 40 00:38:52.941 INFO: Setting log level to 40 00:38:52.941 [2024-06-07 14:41:16.551822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:52.941 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:52.941 14:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:38:52.941 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:52.941 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:53.201 14:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:38:53.201 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:53.201 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:53.462 Nvme0n1 00:38:53.462 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:53.462 14:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:53.462 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:53.462 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:53.462 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:53.462 14:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:53.462 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:53.462 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:53.462 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:53.462 14:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:53.462 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:53.462 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:53.462 [2024-06-07 14:41:16.934413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:53.462 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:53.462 14:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:53.462 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:53.462 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:53.462 [ 00:38:53.462 { 00:38:53.462 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:53.462 "subtype": "Discovery", 00:38:53.462 "listen_addresses": [], 00:38:53.462 "allow_any_host": true, 00:38:53.462 "hosts": [] 00:38:53.462 }, 00:38:53.462 { 00:38:53.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:53.462 "subtype": "NVMe", 00:38:53.462 "listen_addresses": [ 00:38:53.462 { 00:38:53.462 "trtype": "TCP", 00:38:53.462 "adrfam": "IPv4", 00:38:53.462 "traddr": "10.0.0.2", 00:38:53.462 "trsvcid": "4420" 00:38:53.462 } 00:38:53.462 ], 00:38:53.462 "allow_any_host": true, 00:38:53.462 "hosts": [], 00:38:53.462 "serial_number": 
"SPDK00000000000001", 00:38:53.462 "model_number": "SPDK bdev Controller", 00:38:53.462 "max_namespaces": 1, 00:38:53.462 "min_cntlid": 1, 00:38:53.462 "max_cntlid": 65519, 00:38:53.462 "namespaces": [ 00:38:53.462 { 00:38:53.462 "nsid": 1, 00:38:53.462 "bdev_name": "Nvme0n1", 00:38:53.462 "name": "Nvme0n1", 00:38:53.462 "nguid": "36344730526054990025384500000083", 00:38:53.462 "uuid": "36344730-5260-5499-0025-384500000083" 00:38:53.462 } 00:38:53.462 ] 00:38:53.462 } 00:38:53.462 ] 00:38:53.462 14:41:16 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:53.463 14:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:53.463 14:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:53.463 14:41:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:53.463 EAL: No free 2048 kB hugepages reported on node 1 00:38:53.723 14:41:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605499 00:38:53.723 14:41:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:53.723 14:41:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:53.723 14:41:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:53.723 EAL: No free 2048 kB hugepages reported on node 1 00:38:53.723 14:41:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:38:53.723 14:41:17 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605499 '!=' S64GNE0R605499 ']' 00:38:53.723 14:41:17 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:38:53.724 14:41:17 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:53.724 14:41:17 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:53.724 14:41:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:53.724 14:41:17 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:53.724 14:41:17 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:53.724 14:41:17 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:53.724 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:53.724 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:38:53.724 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:53.724 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:38:53.724 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:53.724 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:53.983 rmmod nvme_tcp 00:38:53.983 rmmod nvme_fabrics 00:38:53.983 rmmod nvme_keyring 00:38:53.983 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:53.983 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:38:53.983 14:41:17 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:38:53.983 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 831883 ']' 00:38:53.983 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 831883 00:38:53.983 14:41:17 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 831883 ']' 00:38:53.983 14:41:17 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 831883 00:38:53.983 14:41:17 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:38:53.983 14:41:17 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:53.984 14:41:17 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 831883 00:38:53.984 14:41:17 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:53.984 14:41:17 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:53.984 14:41:17 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 831883' 00:38:53.984 killing process with pid 831883 00:38:53.984 14:41:17 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 831883 00:38:53.984 14:41:17 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 831883 00:38:54.243 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:54.243 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:54.243 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:54.243 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:54.243 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:54.243 14:41:17 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:54.243 14:41:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:54.243 14:41:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.783 14:41:19 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:56.783 00:38:56.783 real 0m13.298s 00:38:56.783 user 0m10.261s 00:38:56.783 sys 0m6.513s 00:38:56.783 14:41:19 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:56.783 14:41:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:56.783 ************************************ 00:38:56.783 END TEST nvmf_identify_passthru 00:38:56.783 ************************************ 00:38:56.783 14:41:19 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:56.783 14:41:19 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:38:56.783 14:41:19 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:56.783 14:41:19 -- common/autotest_common.sh@10 -- # set +x 00:38:56.783 ************************************ 00:38:56.783 START TEST nvmf_dif 00:38:56.783 ************************************ 00:38:56.783 14:41:19 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:56.783 * Looking for test storage... 
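To summarize the nvmf_identify_passthru run that completed above: the test starts nvmf_tgt inside the target network namespace with --wait-for-rpc, turns on admin-command Identify passthru, exports the local PCIe NVMe controller over NVMe/TCP, and then checks that the serial and model numbers reported through the fabric match those reported directly over PCIe. A condensed, hedged restatement of those steps (the rpc.py and spdk_nvme_identify paths and the $bdf value are placeholders taken from this run, not a verbatim copy of identify_passthru.sh):

    rpc=./scripts/rpc.py
    identify=./build/bin/spdk_nvme_identify
    bdf=0000:65:00.0
    ns="ip netns exec cvl_0_0_ns_spdk"            # target-side network namespace

    $ns ./build/bin/nvmf_tgt -i 0 -m 0xF --wait-for-rpc &
    # (the real test waits for the RPC socket before issuing commands)
    $rpc nvmf_set_config --passthru-identify-ctrlr   # forward Identify to the backing ctrlr
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$bdf"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # serial number seen locally and over the fabric must be identical
    local_sn=$($identify -r "trtype:PCIe traddr:$bdf" | awk '/Serial Number:/{print $3}')
    remote_sn=$($identify -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1" | awk '/Serial Number:/{print $3}')
    [ "$local_sn" = "$remote_sn" ] || echo "identify passthru mismatch" >&2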
00:38:56.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:56.783 14:41:19 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:56.783 14:41:19 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:56.783 14:41:19 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:56.783 14:41:19 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:56.783 14:41:19 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:56.783 14:41:19 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.783 14:41:19 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.783 14:41:20 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.783 14:41:20 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:38:56.783 14:41:20 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:56.783 14:41:20 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:56.783 14:41:20 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:38:56.783 14:41:20 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:56.783 14:41:20 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:56.783 14:41:20 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.783 14:41:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:56.783 14:41:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:56.783 14:41:20 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:38:56.783 14:41:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:04.954 14:41:27 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:04.955 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:04.955 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
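The nvmf_dif test starting here repeats the same namespace-based TCP setup and then exercises end-to-end data protection: the transport is created with --dif-insert-or-strip, the backing device is a null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, and fio drives the exported namespace through the SPDK bdev fio plugin (all of these commands appear further down in this trace). A hedged, condensed sketch of that flow (the rpc.py path, fio plugin path, and job/JSON file names are placeholders):

    rpc=./scripts/rpc.py

    # target inserts/strips the protection information on behalf of the host
    $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # 64 MB null bdev, 512-byte blocks + 16-byte metadata, DIF type 1
    $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # fio reaches the exported namespace via the spdk_bdev ioengine;
    # bdev.json attaches an NVMe-oF controller (bdev_nvme_attach_controller over tcp)
    LD_PRELOAD=./build/fio/spdk_bdev fio --ioengine=spdk_bdev \
        --spdk_json_conf=bdev.json job.fio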
00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:04.955 Found net devices under 0000:31:00.0: cvl_0_0 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:04.955 Found net devices under 0000:31:00.1: cvl_0_1 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:04.955 14:41:27 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:04.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:04.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:39:04.955 00:39:04.955 --- 10.0.0.2 ping statistics --- 00:39:04.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:04.955 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:04.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:04.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:39:04.955 00:39:04.955 --- 10.0.0.1 ping statistics --- 00:39:04.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:04.955 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:39:04.955 14:41:27 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:08.287 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:39:08.287 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:39:08.287 14:41:31 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:08.287 14:41:31 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:08.287 14:41:31 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:08.287 14:41:31 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:08.287 14:41:31 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:08.287 14:41:31 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:08.287 14:41:31 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:39:08.287 14:41:31 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:39:08.287 14:41:31 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:08.287 14:41:31 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 00:39:08.287 14:41:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:08.287 14:41:31 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=838522 00:39:08.287 14:41:31 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 838522 00:39:08.287 14:41:31 nvmf_dif -- common/autotest_common.sh@830 
-- # '[' -z 838522 ']' 00:39:08.287 14:41:31 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:08.287 14:41:31 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:08.287 14:41:31 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:08.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:08.287 14:41:31 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:08.287 14:41:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:08.287 14:41:31 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:39:08.287 [2024-06-07 14:41:31.630987] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:39:08.287 [2024-06-07 14:41:31.631032] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:08.287 EAL: No free 2048 kB hugepages reported on node 1 00:39:08.287 [2024-06-07 14:41:31.700562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.287 [2024-06-07 14:41:31.731651] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:08.287 [2024-06-07 14:41:31.731686] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:08.287 [2024-06-07 14:41:31.731694] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:08.287 [2024-06-07 14:41:31.731701] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:08.287 [2024-06-07 14:41:31.731706] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:08.287 [2024-06-07 14:41:31.731723] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.858 14:41:32 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:08.858 14:41:32 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 00:39:08.858 14:41:32 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:08.858 14:41:32 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 00:39:08.858 14:41:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:08.858 14:41:32 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:08.858 14:41:32 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:39:08.858 14:41:32 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:39:08.858 14:41:32 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:08.858 14:41:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:08.858 [2024-06-07 14:41:32.427397] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:08.858 14:41:32 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:08.858 14:41:32 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:39:08.858 14:41:32 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:39:08.858 14:41:32 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:08.858 14:41:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:08.858 ************************************ 00:39:08.858 START TEST fio_dif_1_default 00:39:08.858 ************************************ 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:08.858 bdev_null0 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:08.858 [2024-06-07 14:41:32.499698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:08.858 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:09.119 { 00:39:09.119 "params": { 00:39:09.119 "name": "Nvme$subsystem", 00:39:09.119 "trtype": "$TEST_TRANSPORT", 00:39:09.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:09.119 "adrfam": "ipv4", 00:39:09.119 "trsvcid": "$NVMF_PORT", 00:39:09.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:09.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:09.119 "hdgst": ${hdgst:-false}, 00:39:09.119 "ddgst": ${ddgst:-false} 00:39:09.119 }, 00:39:09.119 "method": "bdev_nvme_attach_controller" 00:39:09.119 } 00:39:09.119 EOF 00:39:09.119 )") 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libasan 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:09.119 "params": { 00:39:09.119 "name": "Nvme0", 00:39:09.119 "trtype": "tcp", 00:39:09.119 "traddr": "10.0.0.2", 00:39:09.119 "adrfam": "ipv4", 00:39:09.119 "trsvcid": "4420", 00:39:09.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:09.119 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:09.119 "hdgst": false, 00:39:09.119 "ddgst": false 00:39:09.119 }, 00:39:09.119 "method": "bdev_nvme_attach_controller" 00:39:09.119 }' 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:09.119 14:41:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:09.381 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:09.381 fio-3.35 00:39:09.381 Starting 1 thread 00:39:09.381 EAL: No free 2048 kB hugepages reported on node 1 00:39:21.613 00:39:21.613 filename0: (groupid=0, jobs=1): err= 0: pid=839000: Fri Jun 7 14:41:43 2024 00:39:21.613 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10029msec) 00:39:21.613 slat (nsec): min=5625, max=48176, avg=6715.12, stdev=2223.57 00:39:21.613 clat (usec): min=40743, max=42906, avg=41421.18, stdev=500.33 00:39:21.613 lat (usec): min=40751, max=42943, avg=41427.90, stdev=500.44 00:39:21.613 clat percentiles (usec): 00:39:21.613 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:21.613 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:39:21.613 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:21.613 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:39:21.613 | 99.99th=[42730] 00:39:21.613 bw ( KiB/s): min= 384, max= 416, per=99.72%, avg=385.60, stdev= 7.16, samples=20 00:39:21.613 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:39:21.613 
lat (msec) : 50=100.00% 00:39:21.613 cpu : usr=95.18%, sys=4.60%, ctx=17, majf=0, minf=232 00:39:21.613 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:21.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:21.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:21.613 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:21.613 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:21.613 00:39:21.613 Run status group 0 (all jobs): 00:39:21.613 READ: bw=386KiB/s (395kB/s), 386KiB/s-386KiB/s (395kB/s-395kB/s), io=3872KiB (3965kB), run=10029-10029msec 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.613 00:39:21.613 real 0m11.042s 00:39:21.613 user 0m23.066s 00:39:21.613 sys 0m0.796s 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:21.613 ************************************ 00:39:21.613 END TEST fio_dif_1_default 00:39:21.613 ************************************ 00:39:21.613 14:41:43 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:39:21.613 14:41:43 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:39:21.613 14:41:43 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:21.613 14:41:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:21.613 ************************************ 00:39:21.613 START TEST fio_dif_1_multi_subsystems 00:39:21.613 ************************************ 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:39:21.613 14:41:43 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:21.613 bdev_null0 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:21.613 [2024-06-07 14:41:43.620737] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:21.613 bdev_null1 00:39:21.613 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:21.614 { 00:39:21.614 "params": { 00:39:21.614 "name": "Nvme$subsystem", 00:39:21.614 "trtype": "$TEST_TRANSPORT", 00:39:21.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:21.614 "adrfam": "ipv4", 00:39:21.614 "trsvcid": "$NVMF_PORT", 00:39:21.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:21.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:21.614 "hdgst": ${hdgst:-false}, 00:39:21.614 "ddgst": ${ddgst:-false} 00:39:21.614 }, 00:39:21.614 "method": "bdev_nvme_attach_controller" 00:39:21.614 } 00:39:21.614 EOF 00:39:21.614 )") 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # shift 00:39:21.614 14:41:43 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:21.614 { 00:39:21.614 "params": { 00:39:21.614 "name": "Nvme$subsystem", 00:39:21.614 "trtype": "$TEST_TRANSPORT", 00:39:21.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:21.614 "adrfam": "ipv4", 00:39:21.614 "trsvcid": "$NVMF_PORT", 00:39:21.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:21.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:21.614 "hdgst": ${hdgst:-false}, 00:39:21.614 "ddgst": ${ddgst:-false} 00:39:21.614 }, 00:39:21.614 "method": "bdev_nvme_attach_controller" 00:39:21.614 } 00:39:21.614 EOF 00:39:21.614 )") 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:21.614 "params": { 00:39:21.614 "name": "Nvme0", 00:39:21.614 "trtype": "tcp", 00:39:21.614 "traddr": "10.0.0.2", 00:39:21.614 "adrfam": "ipv4", 00:39:21.614 "trsvcid": "4420", 00:39:21.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:21.614 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:21.614 "hdgst": false, 00:39:21.614 "ddgst": false 00:39:21.614 }, 00:39:21.614 "method": "bdev_nvme_attach_controller" 00:39:21.614 },{ 00:39:21.614 "params": { 00:39:21.614 "name": "Nvme1", 00:39:21.614 "trtype": "tcp", 00:39:21.614 "traddr": "10.0.0.2", 00:39:21.614 "adrfam": "ipv4", 00:39:21.614 "trsvcid": "4420", 00:39:21.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:21.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:21.614 "hdgst": false, 00:39:21.614 "ddgst": false 00:39:21.614 }, 00:39:21.614 "method": "bdev_nvme_attach_controller" 00:39:21.614 }' 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:21.614 14:41:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:21.614 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:21.614 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:21.614 fio-3.35 00:39:21.614 Starting 2 threads 00:39:21.614 EAL: No free 2048 kB hugepages reported on node 1 00:39:31.636 00:39:31.636 filename0: (groupid=0, jobs=1): err= 0: pid=841261: Fri Jun 7 14:41:54 2024 00:39:31.636 read: IOPS=96, BW=388KiB/s (397kB/s)(3888KiB/10026msec) 00:39:31.636 slat (nsec): min=5632, max=65437, avg=6914.61, stdev=2785.23 00:39:31.636 clat (usec): min=40809, max=42307, avg=41236.69, stdev=429.98 00:39:31.636 lat (usec): min=40817, max=42340, avg=41243.61, stdev=430.46 00:39:31.636 clat percentiles (usec): 00:39:31.636 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:31.636 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:31.636 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:39:31.636 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:31.636 | 99.99th=[42206] 
00:39:31.636 bw ( KiB/s): min= 384, max= 416, per=49.85%, avg=387.20, stdev= 9.85, samples=20 00:39:31.636 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20 00:39:31.636 lat (msec) : 50=100.00% 00:39:31.636 cpu : usr=97.00%, sys=2.76%, ctx=13, majf=0, minf=183 00:39:31.636 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:31.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.636 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:31.636 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:31.636 filename1: (groupid=0, jobs=1): err= 0: pid=841262: Fri Jun 7 14:41:54 2024 00:39:31.636 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10037msec) 00:39:31.636 slat (nsec): min=5622, max=31244, avg=6990.21, stdev=2074.62 00:39:31.636 clat (usec): min=40813, max=43048, avg=41112.42, stdev=350.83 00:39:31.636 lat (usec): min=40819, max=43057, avg=41119.41, stdev=351.27 00:39:31.636 clat percentiles (usec): 00:39:31.636 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:31.636 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:31.636 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:39:31.636 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:39:31.636 | 99.99th=[43254] 00:39:31.636 bw ( KiB/s): min= 384, max= 416, per=49.98%, avg=388.80, stdev=11.72, samples=20 00:39:31.636 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:39:31.636 lat (msec) : 50=100.00% 00:39:31.636 cpu : usr=96.80%, sys=2.96%, ctx=13, majf=0, minf=122 00:39:31.636 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:31.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.636 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:31.636 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:31.636 00:39:31.636 Run status group 0 (all jobs): 00:39:31.636 READ: bw=776KiB/s (795kB/s), 388KiB/s-389KiB/s (397kB/s-398kB/s), io=7792KiB (7979kB), run=10026-10037msec 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:31.636 14:41:54 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:31.636 00:39:31.636 real 0m11.265s 00:39:31.636 user 0m30.977s 00:39:31.636 sys 0m0.928s 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:31.636 14:41:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:31.636 ************************************ 00:39:31.636 END TEST fio_dif_1_multi_subsystems 00:39:31.636 ************************************ 00:39:31.636 14:41:54 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:39:31.636 14:41:54 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:39:31.636 14:41:54 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:31.636 14:41:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:31.636 ************************************ 00:39:31.636 START TEST fio_dif_rand_params 00:39:31.636 ************************************ 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:31.636 bdev_null0 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:31.636 [2024-06-07 14:41:54.968203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:31.636 14:41:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:31.637 { 00:39:31.637 "params": { 00:39:31.637 "name": "Nvme$subsystem", 00:39:31.637 "trtype": "$TEST_TRANSPORT", 00:39:31.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:31.637 "adrfam": "ipv4", 00:39:31.637 "trsvcid": "$NVMF_PORT", 00:39:31.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:31.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:31.637 "hdgst": ${hdgst:-false}, 00:39:31.637 "ddgst": ${ddgst:-false} 00:39:31.637 }, 00:39:31.637 "method": 
"bdev_nvme_attach_controller" 00:39:31.637 } 00:39:31.637 EOF 00:39:31.637 )") 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:39:31.637 14:41:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:31.637 "params": { 00:39:31.637 "name": "Nvme0", 00:39:31.637 "trtype": "tcp", 00:39:31.637 "traddr": "10.0.0.2", 00:39:31.637 "adrfam": "ipv4", 00:39:31.637 "trsvcid": "4420", 00:39:31.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:31.637 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:31.637 "hdgst": false, 00:39:31.637 "ddgst": false 00:39:31.637 }, 00:39:31.637 "method": "bdev_nvme_attach_controller" 00:39:31.637 }' 00:39:31.637 14:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:39:31.637 14:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:31.637 14:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:31.637 14:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:31.637 14:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:39:31.637 14:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:39:31.637 14:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:39:31.637 14:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:31.637 14:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:31.637 14:41:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:31.905 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:31.905 ... 
00:39:31.905 fio-3.35 00:39:31.905 Starting 3 threads 00:39:31.905 EAL: No free 2048 kB hugepages reported on node 1 00:39:38.482 00:39:38.482 filename0: (groupid=0, jobs=1): err= 0: pid=843455: Fri Jun 7 14:42:00 2024 00:39:38.482 read: IOPS=239, BW=29.9MiB/s (31.4MB/s)(151MiB/5047msec) 00:39:38.482 slat (nsec): min=8210, max=46341, avg=9014.92, stdev=1617.06 00:39:38.482 clat (usec): min=4773, max=92038, avg=12485.84, stdev=11462.29 00:39:38.482 lat (usec): min=4782, max=92047, avg=12494.86, stdev=11462.21 00:39:38.482 clat percentiles (usec): 00:39:38.482 | 1.00th=[ 5538], 5.00th=[ 6587], 10.00th=[ 7111], 20.00th=[ 8029], 00:39:38.482 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:39:38.482 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11469], 95.00th=[49546], 00:39:38.482 | 99.00th=[51643], 99.50th=[52167], 99.90th=[53216], 99.95th=[91751], 00:39:38.482 | 99.99th=[91751] 00:39:38.482 bw ( KiB/s): min=20992, max=42240, per=35.17%, avg=30848.00, stdev=5457.67, samples=10 00:39:38.482 iops : min= 164, max= 330, avg=241.00, stdev=42.64, samples=10 00:39:38.482 lat (msec) : 10=65.81%, 20=25.91%, 50=3.73%, 100=4.55% 00:39:38.482 cpu : usr=96.25%, sys=3.51%, ctx=9, majf=0, minf=59 00:39:38.482 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:38.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.482 issued rwts: total=1208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.482 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:38.482 filename0: (groupid=0, jobs=1): err= 0: pid=843456: Fri Jun 7 14:42:00 2024 00:39:38.482 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(146MiB/5005msec) 00:39:38.482 slat (nsec): min=5678, max=32308, avg=8751.03, stdev=1531.48 00:39:38.482 clat (usec): min=4424, max=53128, avg=12876.00, stdev=8869.06 00:39:38.482 lat (usec): min=4433, max=53135, avg=12884.75, stdev=8868.96 00:39:38.482 clat percentiles (usec): 00:39:38.482 | 1.00th=[ 5407], 5.00th=[ 6587], 10.00th=[ 7373], 20.00th=[ 8356], 00:39:38.482 | 30.00th=[ 9503], 40.00th=[10421], 50.00th=[11207], 60.00th=[11994], 00:39:38.482 | 70.00th=[12911], 80.00th=[13698], 90.00th=[14615], 95.00th=[46400], 00:39:38.482 | 99.00th=[49546], 99.50th=[52167], 99.90th=[52691], 99.95th=[53216], 00:39:38.482 | 99.99th=[53216] 00:39:38.482 bw ( KiB/s): min=22784, max=35840, per=33.92%, avg=29747.20, stdev=4404.06, samples=10 00:39:38.482 iops : min= 178, max= 280, avg=232.40, stdev=34.41, samples=10 00:39:38.482 lat (msec) : 10=34.94%, 20=59.66%, 50=4.46%, 100=0.94% 00:39:38.482 cpu : usr=95.80%, sys=3.96%, ctx=8, majf=0, minf=99 00:39:38.482 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:38.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.482 issued rwts: total=1165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.482 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:38.482 filename0: (groupid=0, jobs=1): err= 0: pid=843457: Fri Jun 7 14:42:00 2024 00:39:38.482 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(136MiB/5046msec) 00:39:38.482 slat (nsec): min=5832, max=44164, avg=8808.52, stdev=2764.13 00:39:38.482 clat (usec): min=5674, max=88507, avg=13901.67, stdev=9373.12 00:39:38.482 lat (usec): min=5680, max=88516, avg=13910.48, stdev=9373.39 00:39:38.482 clat percentiles (usec): 00:39:38.482 | 
1.00th=[ 6325], 5.00th=[ 7570], 10.00th=[ 8225], 20.00th=[ 8979], 00:39:38.482 | 30.00th=[10290], 40.00th=[11207], 50.00th=[11863], 60.00th=[12911], 00:39:38.482 | 70.00th=[14222], 80.00th=[15270], 90.00th=[16581], 95.00th=[18482], 00:39:38.482 | 99.00th=[52167], 99.50th=[55313], 99.90th=[87557], 99.95th=[88605], 00:39:38.482 | 99.99th=[88605] 00:39:38.482 bw ( KiB/s): min=22272, max=31232, per=31.59%, avg=27703.80, stdev=3363.31, samples=10 00:39:38.482 iops : min= 174, max= 244, avg=216.40, stdev=26.33, samples=10 00:39:38.482 lat (msec) : 10=26.73%, 20=68.39%, 50=2.12%, 100=2.76% 00:39:38.482 cpu : usr=87.91%, sys=7.61%, ctx=692, majf=0, minf=105 00:39:38.482 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:38.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.482 issued rwts: total=1085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.482 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:38.482 00:39:38.482 Run status group 0 (all jobs): 00:39:38.482 READ: bw=85.6MiB/s (89.8MB/s), 26.9MiB/s-29.9MiB/s (28.2MB/s-31.4MB/s), io=432MiB (453MB), run=5005-5047msec 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:38.482 14:42:01 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.482 bdev_null0 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.482 [2024-06-07 14:42:01.091423] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:38.482 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.483 bdev_null1 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.483 bdev_null2 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:38.483 { 00:39:38.483 "params": { 00:39:38.483 "name": "Nvme$subsystem", 00:39:38.483 "trtype": "$TEST_TRANSPORT", 00:39:38.483 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:39:38.483 "adrfam": "ipv4", 00:39:38.483 "trsvcid": "$NVMF_PORT", 00:39:38.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.483 "hdgst": ${hdgst:-false}, 00:39:38.483 "ddgst": ${ddgst:-false} 00:39:38.483 }, 00:39:38.483 "method": "bdev_nvme_attach_controller" 00:39:38.483 } 00:39:38.483 EOF 00:39:38.483 )") 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:38.483 { 00:39:38.483 "params": { 00:39:38.483 "name": "Nvme$subsystem", 00:39:38.483 "trtype": "$TEST_TRANSPORT", 00:39:38.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:38.483 "adrfam": "ipv4", 00:39:38.483 "trsvcid": "$NVMF_PORT", 00:39:38.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.483 "hdgst": ${hdgst:-false}, 00:39:38.483 "ddgst": ${ddgst:-false} 00:39:38.483 }, 00:39:38.483 "method": "bdev_nvme_attach_controller" 00:39:38.483 } 00:39:38.483 EOF 00:39:38.483 )") 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:38.483 { 00:39:38.483 "params": { 00:39:38.483 "name": "Nvme$subsystem", 00:39:38.483 "trtype": "$TEST_TRANSPORT", 00:39:38.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:38.483 "adrfam": "ipv4", 00:39:38.483 "trsvcid": "$NVMF_PORT", 00:39:38.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.483 "hdgst": ${hdgst:-false}, 00:39:38.483 "ddgst": ${ddgst:-false} 00:39:38.483 }, 00:39:38.483 "method": "bdev_nvme_attach_controller" 00:39:38.483 } 00:39:38.483 EOF 00:39:38.483 )") 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:39:38.483 14:42:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:38.483 "params": { 00:39:38.483 "name": "Nvme0", 00:39:38.483 "trtype": "tcp", 00:39:38.483 "traddr": "10.0.0.2", 00:39:38.483 "adrfam": "ipv4", 00:39:38.483 "trsvcid": "4420", 00:39:38.483 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:38.483 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:38.483 "hdgst": false, 00:39:38.483 "ddgst": false 00:39:38.483 }, 00:39:38.483 "method": "bdev_nvme_attach_controller" 00:39:38.483 },{ 00:39:38.483 "params": { 00:39:38.483 "name": "Nvme1", 00:39:38.483 "trtype": "tcp", 00:39:38.483 "traddr": "10.0.0.2", 00:39:38.483 "adrfam": "ipv4", 00:39:38.483 "trsvcid": "4420", 00:39:38.483 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:38.483 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:38.483 "hdgst": false, 00:39:38.483 "ddgst": false 00:39:38.483 }, 00:39:38.483 "method": "bdev_nvme_attach_controller" 00:39:38.483 },{ 00:39:38.483 "params": { 00:39:38.483 "name": "Nvme2", 00:39:38.483 "trtype": "tcp", 00:39:38.483 "traddr": "10.0.0.2", 00:39:38.483 "adrfam": "ipv4", 00:39:38.483 "trsvcid": "4420", 00:39:38.483 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:38.483 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:38.483 "hdgst": false, 00:39:38.483 "ddgst": false 00:39:38.483 }, 00:39:38.484 "method": "bdev_nvme_attach_controller" 00:39:38.484 }' 00:39:38.484 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:39:38.484 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:38.484 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:38.484 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:38.484 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:39:38.484 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:39:38.484 14:42:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # asan_lib= 00:39:38.484 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:38.484 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:38.484 14:42:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:38.484 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:38.484 ... 00:39:38.484 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:38.484 ... 00:39:38.484 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:38.484 ... 00:39:38.484 fio-3.35 00:39:38.484 Starting 24 threads 00:39:38.484 EAL: No free 2048 kB hugepages reported on node 1 00:39:50.766 00:39:50.766 filename0: (groupid=0, jobs=1): err= 0: pid=845069: Fri Jun 7 14:42:12 2024 00:39:50.766 read: IOPS=524, BW=2098KiB/s (2148kB/s)(20.5MiB/10023msec) 00:39:50.766 slat (usec): min=5, max=185, avg=18.42, stdev=20.27 00:39:50.766 clat (usec): min=2051, max=51462, avg=30363.45, stdev=6299.11 00:39:50.766 lat (usec): min=2062, max=51479, avg=30381.87, stdev=6300.56 00:39:50.766 clat percentiles (usec): 00:39:50.766 | 1.00th=[ 2245], 5.00th=[18482], 10.00th=[24249], 20.00th=[31327], 00:39:50.766 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:39:50.766 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:50.766 | 99.00th=[42206], 99.50th=[45351], 99.90th=[51119], 99.95th=[51643], 00:39:50.766 | 99.99th=[51643] 00:39:50.766 bw ( KiB/s): min= 1920, max= 3328, per=4.39%, avg=2096.00, stdev=314.52, samples=20 00:39:50.766 iops : min= 480, max= 832, avg=524.00, stdev=78.63, samples=20 00:39:50.766 lat (msec) : 4=2.74%, 10=0.30%, 20=3.60%, 50=93.25%, 100=0.11% 00:39:50.766 cpu : usr=98.96%, sys=0.68%, ctx=32, majf=0, minf=33 00:39:50.766 IO depths : 1=5.4%, 2=10.9%, 4=22.6%, 8=53.8%, 16=7.2%, 32=0.0%, >=64=0.0% 00:39:50.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.766 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.766 issued rwts: total=5256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.766 filename0: (groupid=0, jobs=1): err= 0: pid=845070: Fri Jun 7 14:42:12 2024 00:39:50.766 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10004msec) 00:39:50.766 slat (usec): min=5, max=164, avg=28.66, stdev=16.46 00:39:50.766 clat (usec): min=13027, max=57502, avg=32102.60, stdev=2379.35 00:39:50.766 lat (usec): min=13035, max=57537, avg=32131.26, stdev=2379.87 00:39:50.766 clat percentiles (usec): 00:39:50.766 | 1.00th=[26346], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:39:50.766 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:39:50.766 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:50.766 | 99.00th=[35390], 99.50th=[50070], 99.90th=[57410], 99.95th=[57410], 00:39:50.766 | 99.99th=[57410] 00:39:50.766 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1967.16, stdev=76.45, samples=19 00:39:50.766 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:39:50.766 lat (msec) : 20=0.83%, 50=98.77%, 
100=0.40% 00:39:50.766 cpu : usr=99.21%, sys=0.45%, ctx=30, majf=0, minf=19 00:39:50.766 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:50.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.766 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.766 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.766 filename0: (groupid=0, jobs=1): err= 0: pid=845071: Fri Jun 7 14:42:12 2024 00:39:50.766 read: IOPS=504, BW=2019KiB/s (2067kB/s)(19.7MiB/10009msec) 00:39:50.766 slat (usec): min=5, max=159, avg=27.89, stdev=23.22 00:39:50.766 clat (usec): min=10576, max=72583, avg=31441.38, stdev=4423.12 00:39:50.766 lat (usec): min=10582, max=72603, avg=31469.27, stdev=4426.07 00:39:50.766 clat percentiles (usec): 00:39:50.766 | 1.00th=[15533], 5.00th=[22938], 10.00th=[27919], 20.00th=[31589], 00:39:50.766 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:39:50.766 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33162], 95.00th=[35390], 00:39:50.766 | 99.00th=[46924], 99.50th=[51119], 99.90th=[58459], 99.95th=[58459], 00:39:50.766 | 99.99th=[72877] 00:39:50.766 bw ( KiB/s): min= 1840, max= 2288, per=4.22%, avg=2014.32, stdev=105.18, samples=19 00:39:50.766 iops : min= 460, max= 572, avg=503.58, stdev=26.29, samples=19 00:39:50.766 lat (msec) : 20=3.25%, 50=96.04%, 100=0.71% 00:39:50.766 cpu : usr=98.52%, sys=0.84%, ctx=91, majf=0, minf=18 00:39:50.766 IO depths : 1=3.8%, 2=8.0%, 4=18.3%, 8=60.5%, 16=9.3%, 32=0.0%, >=64=0.0% 00:39:50.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.766 complete : 0=0.0%, 4=92.4%, 8=2.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.766 issued rwts: total=5052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.766 filename0: (groupid=0, jobs=1): err= 0: pid=845072: Fri Jun 7 14:42:12 2024 00:39:50.766 read: IOPS=506, BW=2026KiB/s (2075kB/s)(19.8MiB/10012msec) 00:39:50.766 slat (usec): min=5, max=106, avg=17.78, stdev=14.00 00:39:50.766 clat (usec): min=11469, max=57730, avg=31430.80, stdev=4462.30 00:39:50.766 lat (usec): min=11479, max=57739, avg=31448.58, stdev=4463.68 00:39:50.766 clat percentiles (usec): 00:39:50.766 | 1.00th=[13960], 5.00th=[22676], 10.00th=[27395], 20.00th=[31589], 00:39:50.766 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:39:50.766 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33817], 00:39:50.766 | 99.00th=[49021], 99.50th=[52167], 99.90th=[57410], 99.95th=[57934], 00:39:50.766 | 99.99th=[57934] 00:39:50.766 bw ( KiB/s): min= 1920, max= 2356, per=4.24%, avg=2024.85, stdev=113.72, samples=20 00:39:50.766 iops : min= 480, max= 589, avg=506.10, stdev=28.53, samples=20 00:39:50.766 lat (msec) : 20=3.57%, 50=95.60%, 100=0.83% 00:39:50.766 cpu : usr=99.09%, sys=0.57%, ctx=23, majf=0, minf=25 00:39:50.766 IO depths : 1=4.6%, 2=9.5%, 4=21.5%, 8=56.5%, 16=8.0%, 32=0.0%, >=64=0.0% 00:39:50.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.766 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.766 issued rwts: total=5072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.766 filename0: (groupid=0, jobs=1): err= 0: pid=845073: Fri Jun 7 14:42:12 2024 00:39:50.766 read: IOPS=493, 
BW=1975KiB/s (2022kB/s)(19.3MiB/10013msec) 00:39:50.766 slat (usec): min=5, max=104, avg=32.21, stdev=17.49 00:39:50.766 clat (usec): min=13065, max=53454, avg=32121.38, stdev=2072.74 00:39:50.766 lat (usec): min=13072, max=53471, avg=32153.60, stdev=2072.74 00:39:50.766 clat percentiles (usec): 00:39:50.766 | 1.00th=[27132], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:39:50.766 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:39:50.766 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:50.766 | 99.00th=[38011], 99.50th=[41681], 99.90th=[53216], 99.95th=[53216], 00:39:50.766 | 99.99th=[53216] 00:39:50.766 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1967.16, stdev=76.45, samples=19 00:39:50.766 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:39:50.766 lat (msec) : 20=0.65%, 50=99.03%, 100=0.32% 00:39:50.766 cpu : usr=98.98%, sys=0.69%, ctx=15, majf=0, minf=20 00:39:50.766 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:50.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.766 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.766 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.766 filename0: (groupid=0, jobs=1): err= 0: pid=845074: Fri Jun 7 14:42:12 2024 00:39:50.766 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10001msec) 00:39:50.766 slat (usec): min=5, max=113, avg=30.82, stdev=18.59 00:39:50.766 clat (usec): min=24851, max=39862, avg=32101.72, stdev=985.98 00:39:50.766 lat (usec): min=24867, max=39876, avg=32132.54, stdev=984.80 00:39:50.766 clat percentiles (usec): 00:39:50.766 | 1.00th=[27657], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:39:50.766 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:39:50.766 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:50.766 | 99.00th=[34341], 99.50th=[36963], 99.90th=[39584], 99.95th=[39584], 00:39:50.766 | 99.99th=[40109] 00:39:50.766 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1974.05, stdev=64.79, samples=19 00:39:50.766 iops : min= 480, max= 512, avg=493.47, stdev=16.23, samples=19 00:39:50.766 lat (msec) : 50=100.00% 00:39:50.766 cpu : usr=99.19%, sys=0.49%, ctx=21, majf=0, minf=21 00:39:50.766 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:50.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.766 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.766 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.766 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.766 filename0: (groupid=0, jobs=1): err= 0: pid=845075: Fri Jun 7 14:42:12 2024 00:39:50.766 read: IOPS=496, BW=1987KiB/s (2034kB/s)(19.4MiB/10019msec) 00:39:50.766 slat (nsec): min=5684, max=94755, avg=10958.57, stdev=9792.66 00:39:50.766 clat (usec): min=13921, max=54887, avg=32121.81, stdev=1752.45 00:39:50.766 lat (usec): min=13928, max=54894, avg=32132.77, stdev=1751.71 00:39:50.767 clat percentiles (usec): 00:39:50.767 | 1.00th=[24773], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:39:50.767 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:50.767 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:50.767 | 99.00th=[34341], 99.50th=[38536], 99.90th=[38536], 
99.95th=[38536], 00:39:50.767 | 99.99th=[54789] 00:39:50.767 bw ( KiB/s): min= 1920, max= 2052, per=4.16%, avg=1986.35, stdev=66.55, samples=20 00:39:50.767 iops : min= 480, max= 513, avg=496.40, stdev=16.83, samples=20 00:39:50.767 lat (msec) : 20=0.64%, 50=99.32%, 100=0.04% 00:39:50.767 cpu : usr=99.10%, sys=0.55%, ctx=25, majf=0, minf=36 00:39:50.767 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:50.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.767 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.767 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.767 filename0: (groupid=0, jobs=1): err= 0: pid=845076: Fri Jun 7 14:42:12 2024 00:39:50.767 read: IOPS=497, BW=1992KiB/s (2039kB/s)(19.5MiB/10002msec) 00:39:50.767 slat (usec): min=5, max=140, avg=34.35, stdev=26.32 00:39:50.767 clat (usec): min=14318, max=55850, avg=31793.76, stdev=2759.67 00:39:50.767 lat (usec): min=14326, max=55883, avg=31828.11, stdev=2761.77 00:39:50.767 clat percentiles (usec): 00:39:50.767 | 1.00th=[20841], 5.00th=[28443], 10.00th=[31327], 20.00th=[31589], 00:39:50.767 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:39:50.767 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:50.767 | 99.00th=[39584], 99.50th=[46400], 99.90th=[55837], 99.95th=[55837], 00:39:50.767 | 99.99th=[55837] 00:39:50.767 bw ( KiB/s): min= 1840, max= 2240, per=4.17%, avg=1989.05, stdev=95.41, samples=19 00:39:50.767 iops : min= 460, max= 560, avg=497.26, stdev=23.85, samples=19 00:39:50.767 lat (msec) : 20=0.86%, 50=98.90%, 100=0.24% 00:39:50.767 cpu : usr=98.66%, sys=0.79%, ctx=69, majf=0, minf=21 00:39:50.767 IO depths : 1=5.6%, 2=11.3%, 4=23.2%, 8=52.9%, 16=7.0%, 32=0.0%, >=64=0.0% 00:39:50.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.767 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.767 issued rwts: total=4980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.767 filename1: (groupid=0, jobs=1): err= 0: pid=845077: Fri Jun 7 14:42:12 2024 00:39:50.767 read: IOPS=494, BW=1978KiB/s (2025kB/s)(19.3MiB/10012msec) 00:39:50.767 slat (usec): min=5, max=136, avg=25.38, stdev=19.14 00:39:50.767 clat (usec): min=12106, max=56062, avg=32144.74, stdev=2193.15 00:39:50.767 lat (usec): min=12117, max=56093, avg=32170.13, stdev=2193.52 00:39:50.767 clat percentiles (usec): 00:39:50.767 | 1.00th=[23725], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:39:50.767 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:39:50.767 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:39:50.767 | 99.00th=[39584], 99.50th=[40633], 99.90th=[55837], 99.95th=[55837], 00:39:50.767 | 99.99th=[55837] 00:39:50.767 bw ( KiB/s): min= 1920, max= 2096, per=4.14%, avg=1975.75, stdev=67.96, samples=20 00:39:50.767 iops : min= 480, max= 524, avg=493.60, stdev=17.27, samples=20 00:39:50.767 lat (msec) : 20=0.24%, 50=99.64%, 100=0.12% 00:39:50.767 cpu : usr=99.13%, sys=0.52%, ctx=21, majf=0, minf=28 00:39:50.767 IO depths : 1=5.7%, 2=11.3%, 4=23.2%, 8=52.9%, 16=6.9%, 32=0.0%, >=64=0.0% 00:39:50.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.767 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:39:50.767 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.767 filename1: (groupid=0, jobs=1): err= 0: pid=845078: Fri Jun 7 14:42:12 2024 00:39:50.767 read: IOPS=495, BW=1980KiB/s (2028kB/s)(19.3MiB/10004msec) 00:39:50.767 slat (usec): min=5, max=131, avg=28.82, stdev=20.40 00:39:50.767 clat (usec): min=10733, max=81520, avg=32070.00, stdev=3140.10 00:39:50.767 lat (usec): min=10744, max=81536, avg=32098.81, stdev=3140.42 00:39:50.767 clat percentiles (usec): 00:39:50.767 | 1.00th=[21103], 5.00th=[31065], 10.00th=[31589], 20.00th=[31589], 00:39:50.767 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:39:50.767 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:50.767 | 99.00th=[39584], 99.50th=[47973], 99.90th=[81265], 99.95th=[81265], 00:39:50.767 | 99.99th=[81265] 00:39:50.767 bw ( KiB/s): min= 1715, max= 2048, per=4.13%, avg=1970.68, stdev=83.35, samples=19 00:39:50.767 iops : min= 428, max= 512, avg=492.63, stdev=20.97, samples=19 00:39:50.767 lat (msec) : 20=0.95%, 50=98.69%, 100=0.36% 00:39:50.767 cpu : usr=98.74%, sys=0.74%, ctx=250, majf=0, minf=25 00:39:50.767 IO depths : 1=4.2%, 2=8.6%, 4=17.8%, 8=59.5%, 16=10.0%, 32=0.0%, >=64=0.0% 00:39:50.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.767 complete : 0=0.0%, 4=91.4%, 8=4.5%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.767 issued rwts: total=4952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.767 filename1: (groupid=0, jobs=1): err= 0: pid=845079: Fri Jun 7 14:42:12 2024 00:39:50.767 read: IOPS=507, BW=2030KiB/s (2079kB/s)(19.8MiB/10006msec) 00:39:50.767 slat (nsec): min=5633, max=92223, avg=14246.06, stdev=13757.07 00:39:50.767 clat (usec): min=8359, max=58211, avg=31417.77, stdev=3978.35 00:39:50.767 lat (usec): min=8371, max=58218, avg=31432.02, stdev=3979.79 00:39:50.767 clat percentiles (usec): 00:39:50.767 | 1.00th=[13304], 5.00th=[24249], 10.00th=[28443], 20.00th=[31589], 00:39:50.767 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:39:50.767 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33817], 00:39:50.767 | 99.00th=[39584], 99.50th=[40109], 99.90th=[56361], 99.95th=[57934], 00:39:50.767 | 99.99th=[58459] 00:39:50.767 bw ( KiB/s): min= 1920, max= 2292, per=4.24%, avg=2027.10, stdev=111.82, samples=20 00:39:50.767 iops : min= 480, max= 573, avg=506.55, stdev=28.18, samples=20 00:39:50.767 lat (msec) : 10=0.12%, 20=3.03%, 50=96.53%, 100=0.32% 00:39:50.767 cpu : usr=98.93%, sys=0.73%, ctx=19, majf=0, minf=21 00:39:50.767 IO depths : 1=5.0%, 2=10.2%, 4=22.0%, 8=55.1%, 16=7.6%, 32=0.0%, >=64=0.0% 00:39:50.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.767 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.767 issued rwts: total=5078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.767 filename1: (groupid=0, jobs=1): err= 0: pid=845080: Fri Jun 7 14:42:12 2024 00:39:50.767 read: IOPS=495, BW=1983KiB/s (2030kB/s)(19.4MiB/10006msec) 00:39:50.767 slat (usec): min=5, max=117, avg=23.07, stdev=17.82 00:39:50.767 clat (usec): min=14031, max=49952, avg=32086.94, stdev=1569.14 00:39:50.767 lat (usec): min=14037, max=49970, avg=32110.01, stdev=1569.33 00:39:50.767 clat percentiles (usec): 00:39:50.767 | 
1.00th=[25560], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:39:50.767 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:39:50.767 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:50.767 | 99.00th=[33817], 99.50th=[38536], 99.90th=[39060], 99.95th=[46400], 00:39:50.767 | 99.99th=[50070] 00:39:50.767 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=1980.79, stdev=78.18, samples=19 00:39:50.767 iops : min= 480, max= 544, avg=495.16, stdev=19.58, samples=19 00:39:50.767 lat (msec) : 20=0.40%, 50=99.60% 00:39:50.767 cpu : usr=99.11%, sys=0.55%, ctx=14, majf=0, minf=22 00:39:50.767 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:50.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.767 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.767 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.767 filename1: (groupid=0, jobs=1): err= 0: pid=845081: Fri Jun 7 14:42:12 2024 00:39:50.767 read: IOPS=497, BW=1990KiB/s (2037kB/s)(19.4MiB/10004msec) 00:39:50.767 slat (usec): min=5, max=107, avg=16.44, stdev=14.21 00:39:50.767 clat (usec): min=7455, max=56821, avg=32068.43, stdev=4750.43 00:39:50.767 lat (usec): min=7461, max=56839, avg=32084.87, stdev=4750.64 00:39:50.767 clat percentiles (usec): 00:39:50.767 | 1.00th=[15401], 5.00th=[23987], 10.00th=[27919], 20.00th=[31589], 00:39:50.767 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:50.767 | 70.00th=[32637], 80.00th=[32900], 90.00th=[35390], 95.00th=[39584], 00:39:50.767 | 99.00th=[52167], 99.50th=[55313], 99.90th=[56886], 99.95th=[56886], 00:39:50.767 | 99.99th=[56886] 00:39:50.767 bw ( KiB/s): min= 1795, max= 2112, per=4.15%, avg=1981.63, stdev=81.25, samples=19 00:39:50.767 iops : min= 448, max= 528, avg=495.37, stdev=20.41, samples=19 00:39:50.767 lat (msec) : 10=0.12%, 20=1.35%, 50=97.43%, 100=1.11% 00:39:50.767 cpu : usr=99.10%, sys=0.55%, ctx=15, majf=0, minf=25 00:39:50.767 IO depths : 1=1.3%, 2=3.2%, 4=9.0%, 8=72.5%, 16=14.0%, 32=0.0%, >=64=0.0% 00:39:50.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.767 complete : 0=0.0%, 4=90.5%, 8=6.7%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.767 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.767 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.767 filename1: (groupid=0, jobs=1): err= 0: pid=845082: Fri Jun 7 14:42:12 2024 00:39:50.767 read: IOPS=494, BW=1980KiB/s (2027kB/s)(19.3MiB/10001msec) 00:39:50.767 slat (nsec): min=5633, max=84974, avg=17276.69, stdev=13962.73 00:39:50.767 clat (usec): min=15530, max=45497, avg=32187.68, stdev=1419.18 00:39:50.767 lat (usec): min=15545, max=45507, avg=32204.95, stdev=1418.26 00:39:50.767 clat percentiles (usec): 00:39:50.767 | 1.00th=[26346], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:39:50.767 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:39:50.767 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:50.767 | 99.00th=[35390], 99.50th=[39584], 99.90th=[45351], 99.95th=[45351], 00:39:50.767 | 99.99th=[45351] 00:39:50.767 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1976.42, stdev=63.64, samples=19 00:39:50.767 iops : min= 480, max= 512, avg=494.11, stdev=15.91, samples=19 00:39:50.767 lat (msec) : 20=0.20%, 50=99.80% 00:39:50.767 cpu : usr=99.10%, 
sys=0.56%, ctx=17, majf=0, minf=20 00:39:50.767 IO depths : 1=5.8%, 2=11.7%, 4=24.0%, 8=51.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:39:50.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.767 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.768 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.768 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.768 filename1: (groupid=0, jobs=1): err= 0: pid=845083: Fri Jun 7 14:42:12 2024 00:39:50.768 read: IOPS=495, BW=1981KiB/s (2029kB/s)(19.4MiB/10009msec) 00:39:50.768 slat (nsec): min=5779, max=98399, avg=27819.56, stdev=16576.94 00:39:50.768 clat (usec): min=12897, max=58016, avg=32067.53, stdev=2691.06 00:39:50.768 lat (usec): min=12906, max=58027, avg=32095.35, stdev=2691.32 00:39:50.768 clat percentiles (usec): 00:39:50.768 | 1.00th=[17957], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:39:50.768 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:39:50.768 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:50.768 | 99.00th=[39060], 99.50th=[49546], 99.90th=[57934], 99.95th=[57934], 00:39:50.768 | 99.99th=[57934] 00:39:50.768 bw ( KiB/s): min= 1840, max= 2048, per=4.13%, avg=1972.63, stdev=67.32, samples=19 00:39:50.768 iops : min= 460, max= 512, avg=493.16, stdev=16.83, samples=19 00:39:50.768 lat (msec) : 20=1.23%, 50=98.31%, 100=0.46% 00:39:50.768 cpu : usr=99.03%, sys=0.64%, ctx=15, majf=0, minf=20 00:39:50.768 IO depths : 1=5.0%, 2=10.1%, 4=20.6%, 8=55.8%, 16=8.4%, 32=0.0%, >=64=0.0% 00:39:50.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.768 complete : 0=0.0%, 4=92.6%, 8=2.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.768 issued rwts: total=4957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.768 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.768 filename1: (groupid=0, jobs=1): err= 0: pid=845084: Fri Jun 7 14:42:12 2024 00:39:50.768 read: IOPS=496, BW=1985KiB/s (2032kB/s)(19.4MiB/10009msec) 00:39:50.768 slat (usec): min=5, max=116, avg=30.74, stdev=18.44 00:39:50.768 clat (usec): min=13921, max=45167, avg=31963.40, stdev=1809.94 00:39:50.768 lat (usec): min=13927, max=45189, avg=31994.14, stdev=1811.49 00:39:50.768 clat percentiles (usec): 00:39:50.768 | 1.00th=[24249], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:39:50.768 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:39:50.768 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:50.768 | 99.00th=[35914], 99.50th=[39060], 99.90th=[44827], 99.95th=[45351], 00:39:50.768 | 99.99th=[45351] 00:39:50.768 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1973.89, stdev=64.93, samples=19 00:39:50.768 iops : min= 480, max= 512, avg=493.47, stdev=16.23, samples=19 00:39:50.768 lat (msec) : 20=0.52%, 50=99.48% 00:39:50.768 cpu : usr=99.00%, sys=0.67%, ctx=17, majf=0, minf=21 00:39:50.768 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:50.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.768 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.768 issued rwts: total=4966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.768 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.768 filename2: (groupid=0, jobs=1): err= 0: pid=845085: Fri Jun 7 14:42:12 2024 00:39:50.768 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10005msec) 
00:39:50.768 slat (usec): min=6, max=130, avg=30.90, stdev=17.55 00:39:50.768 clat (usec): min=4940, max=57787, avg=32081.85, stdev=2284.46 00:39:50.768 lat (usec): min=4949, max=57811, avg=32112.75, stdev=2284.92 00:39:50.768 clat percentiles (usec): 00:39:50.768 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:39:50.768 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:39:50.768 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:50.768 | 99.00th=[35390], 99.50th=[39060], 99.90th=[57934], 99.95th=[57934], 00:39:50.768 | 99.99th=[57934] 00:39:50.768 bw ( KiB/s): min= 1792, max= 2048, per=4.12%, avg=1967.16, stdev=76.45, samples=19 00:39:50.768 iops : min= 448, max= 512, avg=491.79, stdev=19.11, samples=19 00:39:50.768 lat (msec) : 10=0.04%, 20=0.57%, 50=99.07%, 100=0.32% 00:39:50.768 cpu : usr=99.24%, sys=0.43%, ctx=24, majf=0, minf=20 00:39:50.768 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:50.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.768 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.768 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.768 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.768 filename2: (groupid=0, jobs=1): err= 0: pid=845086: Fri Jun 7 14:42:12 2024 00:39:50.768 read: IOPS=499, BW=1998KiB/s (2046kB/s)(19.5MiB/10008msec) 00:39:50.768 slat (usec): min=5, max=131, avg=25.22, stdev=18.34 00:39:50.768 clat (usec): min=16219, max=71083, avg=31825.61, stdev=3157.92 00:39:50.768 lat (usec): min=16227, max=71109, avg=31850.84, stdev=3159.26 00:39:50.768 clat percentiles (usec): 00:39:50.768 | 1.00th=[18220], 5.00th=[26608], 10.00th=[31327], 20.00th=[31589], 00:39:50.768 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:39:50.768 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:50.768 | 99.00th=[39584], 99.50th=[43254], 99.90th=[70779], 99.95th=[70779], 00:39:50.768 | 99.99th=[70779] 00:39:50.768 bw ( KiB/s): min= 1920, max= 2208, per=4.18%, avg=1996.63, stdev=82.72, samples=19 00:39:50.768 iops : min= 480, max= 552, avg=499.16, stdev=20.68, samples=19 00:39:50.768 lat (msec) : 20=1.20%, 50=98.48%, 100=0.32% 00:39:50.768 cpu : usr=99.21%, sys=0.45%, ctx=14, majf=0, minf=22 00:39:50.768 IO depths : 1=5.6%, 2=11.1%, 4=22.9%, 8=53.4%, 16=7.1%, 32=0.0%, >=64=0.0% 00:39:50.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.768 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.768 issued rwts: total=4998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.768 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.768 filename2: (groupid=0, jobs=1): err= 0: pid=845087: Fri Jun 7 14:42:12 2024 00:39:50.768 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10010msec) 00:39:50.768 slat (usec): min=5, max=104, avg=29.16, stdev=18.47 00:39:50.768 clat (usec): min=25049, max=41595, avg=32148.40, stdev=1001.88 00:39:50.768 lat (usec): min=25089, max=41626, avg=32177.56, stdev=1000.56 00:39:50.768 clat percentiles (usec): 00:39:50.768 | 1.00th=[28443], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:39:50.768 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:39:50.768 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:50.768 | 99.00th=[35914], 99.50th=[36963], 99.90th=[39584], 99.95th=[41681], 00:39:50.768 
| 99.99th=[41681] 00:39:50.768 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1973.89, stdev=64.93, samples=19 00:39:50.768 iops : min= 480, max= 512, avg=493.47, stdev=16.23, samples=19 00:39:50.768 lat (msec) : 50=100.00% 00:39:50.768 cpu : usr=98.89%, sys=0.77%, ctx=26, majf=0, minf=23 00:39:50.768 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:50.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.768 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.768 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.768 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.768 filename2: (groupid=0, jobs=1): err= 0: pid=845088: Fri Jun 7 14:42:12 2024 00:39:50.768 read: IOPS=496, BW=1984KiB/s (2032kB/s)(19.4MiB/10012msec) 00:39:50.768 slat (usec): min=5, max=105, avg=18.97, stdev=15.69 00:39:50.768 clat (usec): min=5052, max=43453, avg=32101.83, stdev=1844.01 00:39:50.768 lat (usec): min=5063, max=43488, avg=32120.80, stdev=1844.04 00:39:50.768 clat percentiles (usec): 00:39:50.768 | 1.00th=[25035], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:39:50.768 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:39:50.768 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:50.768 | 99.00th=[34341], 99.50th=[38011], 99.90th=[39060], 99.95th=[39060], 00:39:50.768 | 99.99th=[43254] 00:39:50.768 bw ( KiB/s): min= 1920, max= 2108, per=4.15%, avg=1982.70, stdev=70.37, samples=20 00:39:50.768 iops : min= 480, max= 527, avg=495.45, stdev=17.80, samples=20 00:39:50.768 lat (msec) : 10=0.14%, 20=0.32%, 50=99.54% 00:39:50.768 cpu : usr=99.12%, sys=0.54%, ctx=17, majf=0, minf=29 00:39:50.768 IO depths : 1=6.0%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:50.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.768 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.768 issued rwts: total=4967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.768 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.768 filename2: (groupid=0, jobs=1): err= 0: pid=845089: Fri Jun 7 14:42:12 2024 00:39:50.768 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10004msec) 00:39:50.768 slat (usec): min=5, max=126, avg=30.18, stdev=18.03 00:39:50.768 clat (usec): min=25103, max=39505, avg=32116.97, stdev=1014.19 00:39:50.768 lat (usec): min=25119, max=39522, avg=32147.15, stdev=1013.40 00:39:50.768 clat percentiles (usec): 00:39:50.768 | 1.00th=[28181], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:39:50.768 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:39:50.768 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:39:50.768 | 99.00th=[35390], 99.50th=[38011], 99.90th=[39584], 99.95th=[39584], 00:39:50.768 | 99.99th=[39584] 00:39:50.768 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1973.89, stdev=64.93, samples=19 00:39:50.768 iops : min= 480, max= 512, avg=493.47, stdev=16.23, samples=19 00:39:50.768 lat (msec) : 50=100.00% 00:39:50.768 cpu : usr=98.90%, sys=0.77%, ctx=15, majf=0, minf=25 00:39:50.768 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:50.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.768 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.768 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:39:50.768 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.768 filename2: (groupid=0, jobs=1): err= 0: pid=845090: Fri Jun 7 14:42:12 2024 00:39:50.768 read: IOPS=494, BW=1976KiB/s (2024kB/s)(19.3MiB/10007msec) 00:39:50.768 slat (usec): min=5, max=110, avg=24.55, stdev=18.25 00:39:50.768 clat (usec): min=7640, max=59419, avg=32191.90, stdev=3794.03 00:39:50.768 lat (usec): min=7646, max=59426, avg=32216.45, stdev=3794.28 00:39:50.768 clat percentiles (usec): 00:39:50.768 | 1.00th=[17433], 5.00th=[27657], 10.00th=[31065], 20.00th=[31589], 00:39:50.768 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:39:50.768 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[35914], 00:39:50.769 | 99.00th=[47449], 99.50th=[52167], 99.90th=[59507], 99.95th=[59507], 00:39:50.769 | 99.99th=[59507] 00:39:50.769 bw ( KiB/s): min= 1856, max= 2072, per=4.11%, avg=1964.63, stdev=66.47, samples=19 00:39:50.769 iops : min= 464, max= 518, avg=491.16, stdev=16.62, samples=19 00:39:50.769 lat (msec) : 10=0.12%, 20=1.15%, 50=97.96%, 100=0.77% 00:39:50.769 cpu : usr=99.08%, sys=0.59%, ctx=18, majf=0, minf=21 00:39:50.769 IO depths : 1=3.4%, 2=7.2%, 4=17.7%, 8=61.6%, 16=10.2%, 32=0.0%, >=64=0.0% 00:39:50.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.769 complete : 0=0.0%, 4=92.5%, 8=2.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.769 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.769 filename2: (groupid=0, jobs=1): err= 0: pid=845091: Fri Jun 7 14:42:12 2024 00:39:50.769 read: IOPS=496, BW=1985KiB/s (2033kB/s)(19.4MiB/10019msec) 00:39:50.769 slat (usec): min=5, max=121, avg=18.93, stdev=18.04 00:39:50.769 clat (usec): min=12248, max=47145, avg=32075.03, stdev=2142.40 00:39:50.769 lat (usec): min=12257, max=47153, avg=32093.96, stdev=2142.70 00:39:50.769 clat percentiles (usec): 00:39:50.769 | 1.00th=[24249], 5.00th=[30278], 10.00th=[31589], 20.00th=[31851], 00:39:50.769 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:39:50.769 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:50.769 | 99.00th=[39584], 99.50th=[41157], 99.90th=[46924], 99.95th=[46924], 00:39:50.769 | 99.99th=[46924] 00:39:50.769 bw ( KiB/s): min= 1920, max= 2096, per=4.15%, avg=1984.50, stdev=71.32, samples=20 00:39:50.769 iops : min= 480, max= 524, avg=495.75, stdev=18.17, samples=20 00:39:50.769 lat (msec) : 20=0.42%, 50=99.58% 00:39:50.769 cpu : usr=98.96%, sys=0.71%, ctx=15, majf=0, minf=20 00:39:50.769 IO depths : 1=4.7%, 2=10.6%, 4=23.7%, 8=53.1%, 16=7.8%, 32=0.0%, >=64=0.0% 00:39:50.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.769 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.769 issued rwts: total=4972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.769 filename2: (groupid=0, jobs=1): err= 0: pid=845092: Fri Jun 7 14:42:12 2024 00:39:50.769 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.5MiB/10012msec) 00:39:50.769 slat (usec): min=5, max=141, avg=24.94, stdev=19.38 00:39:50.769 clat (usec): min=9881, max=58476, avg=31942.21, stdev=2647.42 00:39:50.769 lat (usec): min=9888, max=58487, avg=31967.16, stdev=2649.17 00:39:50.769 clat percentiles (usec): 00:39:50.769 | 1.00th=[19268], 5.00th=[28967], 10.00th=[31589], 20.00th=[31851], 00:39:50.769 | 30.00th=[31851], 
40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:39:50.769 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:39:50.769 | 99.00th=[39060], 99.50th=[39584], 99.90th=[58459], 99.95th=[58459], 00:39:50.769 | 99.99th=[58459] 00:39:50.769 bw ( KiB/s): min= 1920, max= 2180, per=4.16%, avg=1988.65, stdev=80.87, samples=20 00:39:50.769 iops : min= 480, max= 545, avg=496.90, stdev=20.44, samples=20 00:39:50.769 lat (msec) : 10=0.20%, 20=0.92%, 50=98.76%, 100=0.12% 00:39:50.769 cpu : usr=99.23%, sys=0.43%, ctx=16, majf=0, minf=23 00:39:50.769 IO depths : 1=5.5%, 2=11.2%, 4=23.2%, 8=53.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:39:50.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.769 complete : 0=0.0%, 4=93.6%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.769 issued rwts: total=4982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:50.769 00:39:50.769 Run status group 0 (all jobs): 00:39:50.769 READ: bw=46.6MiB/s (48.9MB/s), 1975KiB/s-2098KiB/s (2022kB/s-2148kB/s), io=467MiB (490MB), run=10001-10023msec 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.769 bdev_null0 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.769 [2024-06-07 14:42:12.915139] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.769 bdev_null1 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:50.769 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # 
local sanitizers 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:50.770 { 00:39:50.770 "params": { 00:39:50.770 "name": "Nvme$subsystem", 00:39:50.770 "trtype": "$TEST_TRANSPORT", 00:39:50.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:50.770 "adrfam": "ipv4", 00:39:50.770 "trsvcid": "$NVMF_PORT", 00:39:50.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:50.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:50.770 "hdgst": ${hdgst:-false}, 00:39:50.770 "ddgst": ${ddgst:-false} 00:39:50.770 }, 00:39:50.770 "method": "bdev_nvme_attach_controller" 00:39:50.770 } 00:39:50.770 EOF 00:39:50.770 )") 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:50.770 { 00:39:50.770 "params": { 00:39:50.770 "name": "Nvme$subsystem", 00:39:50.770 "trtype": "$TEST_TRANSPORT", 00:39:50.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:50.770 "adrfam": "ipv4", 00:39:50.770 "trsvcid": "$NVMF_PORT", 00:39:50.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:50.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:50.770 "hdgst": ${hdgst:-false}, 00:39:50.770 "ddgst": ${ddgst:-false} 00:39:50.770 }, 00:39:50.770 "method": "bdev_nvme_attach_controller" 00:39:50.770 } 00:39:50.770 EOF 00:39:50.770 )") 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@72 -- # (( file <= files )) 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:50.770 "params": { 00:39:50.770 "name": "Nvme0", 00:39:50.770 "trtype": "tcp", 00:39:50.770 "traddr": "10.0.0.2", 00:39:50.770 "adrfam": "ipv4", 00:39:50.770 "trsvcid": "4420", 00:39:50.770 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:50.770 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:50.770 "hdgst": false, 00:39:50.770 "ddgst": false 00:39:50.770 }, 00:39:50.770 "method": "bdev_nvme_attach_controller" 00:39:50.770 },{ 00:39:50.770 "params": { 00:39:50.770 "name": "Nvme1", 00:39:50.770 "trtype": "tcp", 00:39:50.770 "traddr": "10.0.0.2", 00:39:50.770 "adrfam": "ipv4", 00:39:50.770 "trsvcid": "4420", 00:39:50.770 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:50.770 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:50.770 "hdgst": false, 00:39:50.770 "ddgst": false 00:39:50.770 }, 00:39:50.770 "method": "bdev_nvme_attach_controller" 00:39:50.770 }' 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:39:50.770 14:42:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:39:50.770 14:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 00:39:50.770 14:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:50.770 14:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:50.770 14:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:50.770 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:50.770 ... 00:39:50.770 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:50.770 ... 
00:39:50.770 fio-3.35 00:39:50.770 Starting 4 threads 00:39:50.770 EAL: No free 2048 kB hugepages reported on node 1 00:39:56.063 00:39:56.063 filename0: (groupid=0, jobs=1): err= 0: pid=847719: Fri Jun 7 14:42:18 2024 00:39:56.063 read: IOPS=2048, BW=16.0MiB/s (16.8MB/s)(80.1MiB/5002msec) 00:39:56.063 slat (nsec): min=5649, max=51598, avg=6205.29, stdev=1704.72 00:39:56.063 clat (usec): min=1862, max=6737, avg=3887.83, stdev=701.76 00:39:56.063 lat (usec): min=1868, max=6768, avg=3894.03, stdev=701.78 00:39:56.063 clat percentiles (usec): 00:39:56.063 | 1.00th=[ 2900], 5.00th=[ 3228], 10.00th=[ 3359], 20.00th=[ 3458], 00:39:56.063 | 30.00th=[ 3523], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3752], 00:39:56.063 | 70.00th=[ 3785], 80.00th=[ 4080], 90.00th=[ 5211], 95.00th=[ 5473], 00:39:56.063 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6456], 99.95th=[ 6521], 00:39:56.063 | 99.99th=[ 6521] 00:39:56.063 bw ( KiB/s): min=16176, max=16576, per=24.33%, avg=16394.67, stdev=141.99, samples=9 00:39:56.063 iops : min= 2022, max= 2072, avg=2049.33, stdev=17.75, samples=9 00:39:56.063 lat (msec) : 2=0.03%, 4=77.88%, 10=22.09% 00:39:56.063 cpu : usr=97.44%, sys=2.22%, ctx=107, majf=0, minf=30 00:39:56.063 IO depths : 1=0.1%, 2=0.2%, 4=72.6%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:56.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.063 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.063 issued rwts: total=10247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:56.063 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:56.063 filename0: (groupid=0, jobs=1): err= 0: pid=847720: Fri Jun 7 14:42:18 2024 00:39:56.063 read: IOPS=2062, BW=16.1MiB/s (16.9MB/s)(80.6MiB/5002msec) 00:39:56.063 slat (nsec): min=5654, max=60697, avg=6181.09, stdev=1634.48 00:39:56.063 clat (usec): min=1766, max=7113, avg=3861.29, stdev=664.38 00:39:56.063 lat (usec): min=1773, max=7119, avg=3867.47, stdev=664.35 00:39:56.063 clat percentiles (usec): 00:39:56.063 | 1.00th=[ 2933], 5.00th=[ 3228], 10.00th=[ 3359], 20.00th=[ 3458], 00:39:56.063 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3720], 00:39:56.063 | 70.00th=[ 3785], 80.00th=[ 4015], 90.00th=[ 5211], 95.00th=[ 5473], 00:39:56.063 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6325], 99.95th=[ 6390], 00:39:56.063 | 99.99th=[ 7111] 00:39:56.063 bw ( KiB/s): min=16288, max=17488, per=24.54%, avg=16536.89, stdev=366.70, samples=9 00:39:56.063 iops : min= 2036, max= 2186, avg=2067.11, stdev=45.84, samples=9 00:39:56.063 lat (msec) : 2=0.06%, 4=79.68%, 10=20.26% 00:39:56.063 cpu : usr=97.64%, sys=2.10%, ctx=12, majf=0, minf=84 00:39:56.063 IO depths : 1=0.1%, 2=0.2%, 4=72.9%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:56.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.063 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.063 issued rwts: total=10316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:56.063 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:56.063 filename1: (groupid=0, jobs=1): err= 0: pid=847721: Fri Jun 7 14:42:18 2024 00:39:56.063 read: IOPS=2051, BW=16.0MiB/s (16.8MB/s)(80.2MiB/5001msec) 00:39:56.063 slat (nsec): min=5649, max=50165, avg=6210.42, stdev=1804.19 00:39:56.063 clat (usec): min=1282, max=6318, avg=3882.75, stdev=601.93 00:39:56.063 lat (usec): min=1287, max=6324, avg=3888.96, stdev=601.92 00:39:56.063 clat percentiles (usec): 00:39:56.063 | 1.00th=[ 2802], 5.00th=[ 3228], 
10.00th=[ 3359], 20.00th=[ 3490], 00:39:56.063 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3720], 60.00th=[ 3785], 00:39:56.063 | 70.00th=[ 3884], 80.00th=[ 4228], 90.00th=[ 4817], 95.00th=[ 5342], 00:39:56.063 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 5997], 99.95th=[ 6259], 00:39:56.063 | 99.99th=[ 6325] 00:39:56.063 bw ( KiB/s): min=15328, max=17328, per=24.34%, avg=16405.33, stdev=597.06, samples=9 00:39:56.063 iops : min= 1916, max= 2166, avg=2050.67, stdev=74.63, samples=9 00:39:56.063 lat (msec) : 2=0.05%, 4=72.57%, 10=27.38% 00:39:56.063 cpu : usr=97.24%, sys=2.54%, ctx=7, majf=0, minf=32 00:39:56.063 IO depths : 1=0.4%, 2=1.4%, 4=69.7%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:56.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.063 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.063 issued rwts: total=10260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:56.063 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:56.063 filename1: (groupid=0, jobs=1): err= 0: pid=847722: Fri Jun 7 14:42:18 2024 00:39:56.063 read: IOPS=2263, BW=17.7MiB/s (18.5MB/s)(88.5MiB/5003msec) 00:39:56.063 slat (nsec): min=5659, max=56519, avg=6202.77, stdev=1735.35 00:39:56.063 clat (usec): min=1995, max=6521, avg=3520.25, stdev=408.16 00:39:56.063 lat (usec): min=2000, max=6527, avg=3526.46, stdev=408.17 00:39:56.063 clat percentiles (usec): 00:39:56.063 | 1.00th=[ 2474], 5.00th=[ 2802], 10.00th=[ 3032], 20.00th=[ 3228], 00:39:56.063 | 30.00th=[ 3392], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3589], 00:39:56.063 | 70.00th=[ 3720], 80.00th=[ 3785], 90.00th=[ 3851], 95.00th=[ 4113], 00:39:56.063 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 5407], 99.95th=[ 6259], 00:39:56.063 | 99.99th=[ 6521] 00:39:56.063 bw ( KiB/s): min=17456, max=19008, per=26.81%, avg=18067.56, stdev=558.96, samples=9 00:39:56.064 iops : min= 2182, max= 2376, avg=2258.44, stdev=69.87, samples=9 00:39:56.064 lat (msec) : 2=0.01%, 4=93.33%, 10=6.66% 00:39:56.064 cpu : usr=97.28%, sys=2.38%, ctx=112, majf=0, minf=36 00:39:56.064 IO depths : 1=0.1%, 2=0.3%, 4=65.9%, 8=33.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:56.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.064 complete : 0=0.0%, 4=97.3%, 8=2.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.064 issued rwts: total=11322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:56.064 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:56.064 00:39:56.064 Run status group 0 (all jobs): 00:39:56.064 READ: bw=65.8MiB/s (69.0MB/s), 16.0MiB/s-17.7MiB/s (16.8MB/s-18.5MB/s), io=329MiB (345MB), run=5001-5003msec 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:56.064 00:39:56.064 real 0m24.159s 00:39:56.064 user 5m16.020s 00:39:56.064 sys 0m3.780s 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:56.064 14:42:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:56.064 ************************************ 00:39:56.064 END TEST fio_dif_rand_params 00:39:56.064 ************************************ 00:39:56.064 14:42:19 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:39:56.064 14:42:19 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:39:56.064 14:42:19 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:56.064 14:42:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:56.064 ************************************ 00:39:56.064 START TEST fio_dif_digest 00:39:56.064 ************************************ 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:39:56.064 
14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:56.064 bdev_null0 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:56.064 [2024-06-07 14:42:19.209318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:56.064 { 00:39:56.064 "params": { 00:39:56.064 "name": "Nvme$subsystem", 00:39:56.064 "trtype": "$TEST_TRANSPORT", 00:39:56.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:56.064 "adrfam": "ipv4", 00:39:56.064 "trsvcid": "$NVMF_PORT", 00:39:56.064 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:39:56.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:56.064 "hdgst": ${hdgst:-false}, 00:39:56.064 "ddgst": ${ddgst:-false} 00:39:56.064 }, 00:39:56.064 "method": "bdev_nvme_attach_controller" 00:39:56.064 } 00:39:56.064 EOF 00:39:56.064 )") 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:39:56.064 14:42:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:56.064 "params": { 00:39:56.064 "name": "Nvme0", 00:39:56.064 "trtype": "tcp", 00:39:56.064 "traddr": "10.0.0.2", 00:39:56.064 "adrfam": "ipv4", 00:39:56.064 "trsvcid": "4420", 00:39:56.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:56.064 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:56.064 "hdgst": true, 00:39:56.065 "ddgst": true 00:39:56.065 }, 00:39:56.065 "method": "bdev_nvme_attach_controller" 00:39:56.065 }' 00:39:56.065 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:39:56.065 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:56.065 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:39:56.065 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:56.065 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:39:56.065 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:39:56.065 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 00:39:56.065 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:39:56.065 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:56.065 14:42:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:56.065 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:56.065 ... 
00:39:56.065 fio-3.35 00:39:56.065 Starting 3 threads 00:39:56.065 EAL: No free 2048 kB hugepages reported on node 1 00:40:08.300 00:40:08.300 filename0: (groupid=0, jobs=1): err= 0: pid=849202: Fri Jun 7 14:42:30 2024 00:40:08.300 read: IOPS=227, BW=28.5MiB/s (29.9MB/s)(285MiB/10008msec) 00:40:08.300 slat (nsec): min=5985, max=35144, avg=7748.44, stdev=1770.65 00:40:08.300 clat (usec): min=7993, max=16682, avg=13151.75, stdev=1129.39 00:40:08.300 lat (usec): min=8002, max=16691, avg=13159.50, stdev=1129.25 00:40:08.300 clat percentiles (usec): 00:40:08.300 | 1.00th=[ 9241], 5.00th=[11338], 10.00th=[11863], 20.00th=[12387], 00:40:08.300 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13304], 60.00th=[13435], 00:40:08.300 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14484], 95.00th=[14877], 00:40:08.300 | 99.00th=[15533], 99.50th=[15795], 99.90th=[16057], 99.95th=[16188], 00:40:08.300 | 99.99th=[16712] 00:40:08.300 bw ( KiB/s): min=28416, max=29952, per=34.95%, avg=29158.40, stdev=454.17, samples=20 00:40:08.300 iops : min= 222, max= 234, avg=227.80, stdev= 3.55, samples=20 00:40:08.300 lat (msec) : 10=1.84%, 20=98.16% 00:40:08.300 cpu : usr=95.50%, sys=4.29%, ctx=20, majf=0, minf=108 00:40:08.300 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:08.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:08.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:08.300 issued rwts: total=2281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:08.300 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:08.300 filename0: (groupid=0, jobs=1): err= 0: pid=849203: Fri Jun 7 14:42:30 2024 00:40:08.300 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(271MiB/10046msec) 00:40:08.300 slat (nsec): min=6011, max=34641, avg=7601.80, stdev=1788.93 00:40:08.300 clat (usec): min=8953, max=55465, avg=13890.78, stdev=2687.93 00:40:08.300 lat (usec): min=8960, max=55500, avg=13898.38, stdev=2688.13 00:40:08.300 clat percentiles (usec): 00:40:08.300 | 1.00th=[10290], 5.00th=[11994], 10.00th=[12387], 20.00th=[12911], 00:40:08.300 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13698], 60.00th=[13960], 00:40:08.300 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15139], 95.00th=[15533], 00:40:08.300 | 99.00th=[16319], 99.50th=[16909], 99.90th=[55313], 99.95th=[55313], 00:40:08.300 | 99.99th=[55313] 00:40:08.300 bw ( KiB/s): min=25088, max=28928, per=33.18%, avg=27686.40, stdev=933.33, samples=20 00:40:08.300 iops : min= 196, max= 226, avg=216.30, stdev= 7.29, samples=20 00:40:08.300 lat (msec) : 10=0.60%, 20=99.03%, 50=0.05%, 100=0.32% 00:40:08.300 cpu : usr=95.73%, sys=4.05%, ctx=20, majf=0, minf=108 00:40:08.300 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:08.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:08.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:08.300 issued rwts: total=2165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:08.300 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:08.300 filename0: (groupid=0, jobs=1): err= 0: pid=849205: Fri Jun 7 14:42:30 2024 00:40:08.300 read: IOPS=210, BW=26.3MiB/s (27.5MB/s)(263MiB/10003msec) 00:40:08.300 slat (nsec): min=6009, max=31679, avg=7762.97, stdev=1779.77 00:40:08.300 clat (usec): min=8862, max=56202, avg=14267.09, stdev=2883.70 00:40:08.300 lat (usec): min=8876, max=56228, avg=14274.85, stdev=2883.80 00:40:08.300 clat percentiles (usec): 00:40:08.300 | 1.00th=[10945], 
5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:40:08.300 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:40:08.300 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15533], 95.00th=[16057], 00:40:08.300 | 99.00th=[17171], 99.50th=[17957], 99.90th=[55313], 99.95th=[55313], 00:40:08.300 | 99.99th=[56361] 00:40:08.300 bw ( KiB/s): min=24576, max=28416, per=32.20%, avg=26867.20, stdev=1015.46, samples=20 00:40:08.300 iops : min= 192, max= 222, avg=209.90, stdev= 7.93, samples=20 00:40:08.300 lat (msec) : 10=0.38%, 20=99.19%, 100=0.43% 00:40:08.300 cpu : usr=95.86%, sys=3.92%, ctx=24, majf=0, minf=162 00:40:08.300 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:08.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:08.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:08.300 issued rwts: total=2102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:08.300 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:08.300 00:40:08.300 Run status group 0 (all jobs): 00:40:08.300 READ: bw=81.5MiB/s (85.4MB/s), 26.3MiB/s-28.5MiB/s (27.5MB/s-29.9MB/s), io=819MiB (858MB), run=10003-10046msec 00:40:08.300 14:42:30 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:40:08.300 14:42:30 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:40:08.300 14:42:30 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:40:08.300 14:42:30 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:08.300 14:42:30 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:40:08.300 14:42:30 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:08.300 14:42:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:08.300 14:42:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:08.300 14:42:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:08.300 14:42:30 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:08.300 14:42:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:08.300 14:42:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:08.300 14:42:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:08.300 00:40:08.300 real 0m11.188s 00:40:08.300 user 0m43.468s 00:40:08.300 sys 0m1.561s 00:40:08.301 14:42:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:08.301 14:42:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:08.301 ************************************ 00:40:08.301 END TEST fio_dif_digest 00:40:08.301 ************************************ 00:40:08.301 14:42:30 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:40:08.301 14:42:30 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:40:08.301 14:42:30 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:08.301 14:42:30 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:40:08.301 14:42:30 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:08.301 14:42:30 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:40:08.301 14:42:30 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:08.301 14:42:30 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:08.301 rmmod nvme_tcp 00:40:08.301 rmmod nvme_fabrics 00:40:08.301 rmmod 
nvme_keyring 00:40:08.301 14:42:30 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:08.301 14:42:30 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:40:08.301 14:42:30 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:40:08.301 14:42:30 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 838522 ']' 00:40:08.301 14:42:30 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 838522 00:40:08.301 14:42:30 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 838522 ']' 00:40:08.301 14:42:30 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 838522 00:40:08.301 14:42:30 nvmf_dif -- common/autotest_common.sh@954 -- # uname 00:40:08.301 14:42:30 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:40:08.301 14:42:30 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 838522 00:40:08.301 14:42:30 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:40:08.301 14:42:30 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:40:08.301 14:42:30 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 838522' 00:40:08.301 killing process with pid 838522 00:40:08.301 14:42:30 nvmf_dif -- common/autotest_common.sh@968 -- # kill 838522 00:40:08.301 14:42:30 nvmf_dif -- common/autotest_common.sh@973 -- # wait 838522 00:40:08.301 14:42:30 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:40:08.301 14:42:30 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:10.847 Waiting for block devices as requested 00:40:10.847 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:10.847 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:11.108 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:11.108 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:11.108 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:11.108 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:11.369 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:11.369 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:11.369 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:11.629 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:11.629 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:11.629 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:11.890 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:11.890 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:11.890 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:11.890 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:12.152 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:12.152 14:42:35 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:12.152 14:42:35 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:12.152 14:42:35 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:12.152 14:42:35 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:12.152 14:42:35 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:12.152 14:42:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:12.152 14:42:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:14.063 14:42:37 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:14.063 00:40:14.063 real 1m17.774s 00:40:14.063 user 7m55.439s 00:40:14.063 sys 0m20.285s 00:40:14.063 14:42:37 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:14.063 14:42:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:14.063 
************************************ 00:40:14.063 END TEST nvmf_dif 00:40:14.063 ************************************ 00:40:14.063 14:42:37 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:14.063 14:42:37 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:14.063 14:42:37 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:14.063 14:42:37 -- common/autotest_common.sh@10 -- # set +x 00:40:14.324 ************************************ 00:40:14.324 START TEST nvmf_abort_qd_sizes 00:40:14.324 ************************************ 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:14.324 * Looking for test storage... 00:40:14.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:14.324 14:42:37 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:40:14.324 14:42:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:22.494 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:22.494 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:22.494 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:22.495 Found net devices under 0000:31:00.0: cvl_0_0 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:22.495 Found net devices under 0000:31:00.1: cvl_0_1 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
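With both cvl interfaces enumerated, the tcp init that follows splits them across a network namespace: the target listens on 10.0.0.2 inside cvl_0_0_ns_spdk while the initiator stays in the root namespace on 10.0.0.1. A condensed sketch of those steps, using the interface names and addresses from this run:

    # target side: isolate cvl_0_0 in its own namespace and give it the listener address
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator side: keep cvl_0_1 in the root namespace on the peer address
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # reachability check before any test traffic
    ping -c 1 10.0.0.2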
00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:22.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:22.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:40:22.495 00:40:22.495 --- 10.0.0.2 ping statistics --- 00:40:22.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.495 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:22.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:22.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:40:22.495 00:40:22.495 --- 10.0.0.1 ping statistics --- 00:40:22.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:22.495 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:40:22.495 14:42:45 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:25.790 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:25.790 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:25.790 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:25.791 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:25.791 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:25.791 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:25.791 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:25.791 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:25.791 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:25.791 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:25.791 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:25.791 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:25.791 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:25.791 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:25.791 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:26.051 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:26.051 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:40:26.051 14:42:49 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:26.051 14:42:49 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:26.051 14:42:49 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:26.051 14:42:49 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:26.051 14:42:49 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:26.051 14:42:49 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:26.051 14:42:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:40:26.051 14:42:49 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:26.051 14:42:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:26.051 14:42:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:26.052 14:42:49 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=859268 00:40:26.052 14:42:49 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 859268 00:40:26.052 14:42:49 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:40:26.052 14:42:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 859268 ']' 00:40:26.052 14:42:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:26.052 14:42:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:26.052 14:42:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:26.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:26.052 14:42:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:26.052 14:42:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:26.312 [2024-06-07 14:42:49.703687] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:40:26.312 [2024-06-07 14:42:49.703740] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:26.312 EAL: No free 2048 kB hugepages reported on node 1 00:40:26.312 [2024-06-07 14:42:49.777861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:26.312 [2024-06-07 14:42:49.816646] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:26.312 [2024-06-07 14:42:49.816683] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:26.312 [2024-06-07 14:42:49.816690] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:26.312 [2024-06-07 14:42:49.816697] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:26.312 [2024-06-07 14:42:49.816702] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:26.312 [2024-06-07 14:42:49.816840] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:40:26.312 [2024-06-07 14:42:49.816954] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:40:26.312 [2024-06-07 14:42:49.817111] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:26.312 [2024-06-07 14:42:49.817112] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:40:26.882 14:42:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:26.882 14:42:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 00:40:26.882 14:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:26.882 14:42:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:26.882 14:42:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:26.882 14:42:50 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:26.882 14:42:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:40:26.882 14:42:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:40:26.882 14:42:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:40:26.882 14:42:50 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:40:26.882 14:42:50 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:40:26.882 14:42:50 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:40:26.882 14:42:50 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:40:26.882 14:42:50 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:40:26.882 14:42:50 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:40:27.142 14:42:50 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:40:27.142 14:42:50 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:40:27.142 14:42:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:40:27.142 14:42:50 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:40:27.142 14:42:50 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:40:27.142 14:42:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:40:27.142 14:42:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:40:27.142 14:42:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:40:27.142 14:42:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:27.142 14:42:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:27.142 14:42:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:27.142 ************************************ 00:40:27.142 START TEST spdk_target_abort 00:40:27.142 ************************************ 00:40:27.142 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 00:40:27.142 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:40:27.142 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:40:27.142 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:27.142 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.403 spdk_targetn1 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.403 [2024-06-07 14:42:50.886245] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:27.403 [2024-06-07 14:42:50.926511] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:27.403 14:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:27.403 EAL: No free 2048 kB hugepages reported on node 1 
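The qd=4 pass launched above is the first of three queue depths the test sweeps (4, 24, 64); each pass drives mixed reads and writes against the SPDK target while racing abort commands against the in-flight I/O, and the summary lines further down report how many aborts were submitted and how many succeeded. A minimal sketch of that sweep, with the connection string taken from this run:

    # abort_qd_sizes: run the abort example at each queue depth against the TCP subsystem
    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done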
00:40:27.663 [2024-06-07 14:42:51.166687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:296 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:40:27.663 [2024-06-07 14:42:51.166712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0026 p:1 m:0 dnr:0 00:40:27.663 [2024-06-07 14:42:51.183543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:920 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:40:27.663 [2024-06-07 14:42:51.183561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0074 p:1 m:0 dnr:0 00:40:27.663 [2024-06-07 14:42:51.189632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1056 len:8 PRP1 0x2000078be000 PRP2 0x0 00:40:27.663 [2024-06-07 14:42:51.189650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0086 p:1 m:0 dnr:0 00:40:27.663 [2024-06-07 14:42:51.256568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1944 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:40:27.663 [2024-06-07 14:42:51.256585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00f5 p:1 m:0 dnr:0 00:40:27.663 [2024-06-07 14:42:51.266991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2384 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:40:27.663 [2024-06-07 14:42:51.267006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:27.663 [2024-06-07 14:42:51.281626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2864 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:40:27.663 [2024-06-07 14:42:51.281642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:40:27.663 [2024-06-07 14:42:51.284245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3056 len:8 PRP1 0x2000078be000 PRP2 0x0 00:40:27.663 [2024-06-07 14:42:51.284259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:27.922 [2024-06-07 14:42:51.313712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:4008 len:8 PRP1 0x2000078be000 PRP2 0x0 00:40:27.922 [2024-06-07 14:42:51.313729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00f6 p:0 m:0 dnr:0 00:40:31.216 Initializing NVMe Controllers 00:40:31.216 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:31.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:31.216 Initialization complete. Launching workers. 
00:40:31.216 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11890, failed: 8 00:40:31.216 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3170, failed to submit 8728 00:40:31.216 success 754, unsuccess 2416, failed 0 00:40:31.216 14:42:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:31.216 14:42:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:31.216 EAL: No free 2048 kB hugepages reported on node 1 00:40:31.216 [2024-06-07 14:42:54.434507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:488 len:8 PRP1 0x200007c50000 PRP2 0x0 00:40:31.216 [2024-06-07 14:42:54.434547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:40:31.216 [2024-06-07 14:42:54.450402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:680 len:8 PRP1 0x200007c58000 PRP2 0x0 00:40:31.216 [2024-06-07 14:42:54.450424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:0062 p:1 m:0 dnr:0 00:40:31.216 [2024-06-07 14:42:54.505367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:2080 len:8 PRP1 0x200007c52000 PRP2 0x0 00:40:31.216 [2024-06-07 14:42:54.505392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:40:31.216 [2024-06-07 14:42:54.529340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:2576 len:8 PRP1 0x200007c50000 PRP2 0x0 00:40:31.216 [2024-06-07 14:42:54.529363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:40:31.216 [2024-06-07 14:42:54.545243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:3000 len:8 PRP1 0x200007c5e000 PRP2 0x0 00:40:31.216 [2024-06-07 14:42:54.545270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:40:31.216 [2024-06-07 14:42:54.561278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:3344 len:8 PRP1 0x200007c52000 PRP2 0x0 00:40:31.216 [2024-06-07 14:42:54.561308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:00b3 p:0 m:0 dnr:0 00:40:31.216 [2024-06-07 14:42:54.569287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:3528 len:8 PRP1 0x200007c3c000 PRP2 0x0 00:40:31.216 [2024-06-07 14:42:54.569308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:00c5 p:0 m:0 dnr:0 00:40:31.476 [2024-06-07 14:42:55.049140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:14432 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:40:31.476 [2024-06-07 14:42:55.049171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:40:34.016 Initializing NVMe Controllers 00:40:34.016 Attached to NVMe 
over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:34.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:34.016 Initialization complete. Launching workers. 00:40:34.016 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8759, failed: 8 00:40:34.016 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1218, failed to submit 7549 00:40:34.016 success 338, unsuccess 880, failed 0 00:40:34.016 14:42:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:34.016 14:42:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:34.016 EAL: No free 2048 kB hugepages reported on node 1 00:40:37.311 Initializing NVMe Controllers 00:40:37.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:37.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:37.311 Initialization complete. Launching workers. 00:40:37.311 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42185, failed: 0 00:40:37.311 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2518, failed to submit 39667 00:40:37.311 success 589, unsuccess 1929, failed 0 00:40:37.311 14:43:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:40:37.311 14:43:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:37.311 14:43:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:37.311 14:43:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:37.311 14:43:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:40:37.311 14:43:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:37.311 14:43:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:39.221 14:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:39.221 14:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 859268 00:40:39.221 14:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 859268 ']' 00:40:39.221 14:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 859268 00:40:39.221 14:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 00:40:39.221 14:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:40:39.221 14:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 859268 00:40:39.221 14:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:40:39.221 14:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:40:39.221 14:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 
'killing process with pid 859268' 00:40:39.221 killing process with pid 859268 00:40:39.221 14:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 859268 00:40:39.221 14:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 859268 00:40:39.481 00:40:39.481 real 0m12.368s 00:40:39.481 user 0m50.571s 00:40:39.481 sys 0m1.707s 00:40:39.481 14:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:39.481 14:43:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:39.481 ************************************ 00:40:39.481 END TEST spdk_target_abort 00:40:39.481 ************************************ 00:40:39.481 14:43:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:40:39.481 14:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:39.481 14:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:39.481 14:43:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:39.481 ************************************ 00:40:39.481 START TEST kernel_target_abort 00:40:39.481 ************************************ 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:39.481 14:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:43.709 Waiting for block devices as requested 00:40:43.709 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:43.709 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:43.709 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:43.709 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:43.709 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:43.709 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:43.709 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:43.709 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:43.709 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:43.709 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:43.970 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:43.970 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:43.970 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:43.970 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:44.231 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:44.231 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:44.231 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:44.231 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:40:44.231 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:44.231 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:40:44.231 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:40:44.231 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:44.231 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:40:44.231 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:40:44.231 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:40:44.231 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:40:44.492 No valid GPT data, bailing 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:40:44.492 00:40:44.492 Discovery Log Number of Records 2, Generation counter 2 00:40:44.492 =====Discovery Log Entry 0====== 00:40:44.492 trtype: tcp 00:40:44.492 adrfam: ipv4 00:40:44.492 subtype: current discovery subsystem 00:40:44.492 treq: not specified, sq flow control disable supported 00:40:44.492 portid: 1 00:40:44.492 trsvcid: 4420 00:40:44.492 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:44.492 traddr: 10.0.0.1 00:40:44.492 eflags: none 00:40:44.492 sectype: none 00:40:44.492 =====Discovery Log Entry 1====== 00:40:44.492 trtype: tcp 00:40:44.492 adrfam: ipv4 00:40:44.492 subtype: nvme subsystem 00:40:44.492 treq: not specified, sq flow control disable supported 00:40:44.492 portid: 1 00:40:44.492 trsvcid: 4420 00:40:44.492 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:44.492 traddr: 10.0.0.1 00:40:44.492 eflags: none 00:40:44.492 sectype: none 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:44.492 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:44.493 14:43:07 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:44.493 14:43:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:44.493 EAL: No free 2048 kB hugepages reported on node 1 00:40:47.790 Initializing NVMe Controllers 00:40:47.790 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:47.790 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:47.790 Initialization complete. Launching workers. 00:40:47.790 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66260, failed: 0 00:40:47.790 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66260, failed to submit 0 00:40:47.790 success 0, unsuccess 66260, failed 0 00:40:47.790 14:43:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:47.790 14:43:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:47.790 EAL: No free 2048 kB hugepages reported on node 1 00:40:51.085 Initializing NVMe Controllers 00:40:51.085 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:51.085 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:51.085 Initialization complete. Launching workers. 
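(The configure_kernel_target and rabort steps traced above reduce to the sketch below; this is a simplified recap using the values from this run, not the nvmf/common.sh code verbatim. bash xtrace does not print redirection targets, so the configfs attribute names shown are the standard kernel nvmet ones and should be read as assumptions; the NQN, namespace device, listen address and queue depths all appear in the trace. The qd=24 and qd=64 runs continue below.)

    # Kernel NVMe/TCP target, as set up above (simplified sketch).
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed attribute name
    echo 1 > "$subsys/attr_allow_any_host"                         # assumed attribute name
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

    # Abort sweep against that target: 4 KiB random read/write I/O (50/50 mix) while the
    # abort example submits abort commands, at each queue depth in turn.
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done
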
00:40:51.085 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107894, failed: 0 00:40:51.085 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27178, failed to submit 80716 00:40:51.085 success 0, unsuccess 27178, failed 0 00:40:51.085 14:43:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:51.085 14:43:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:51.085 EAL: No free 2048 kB hugepages reported on node 1 00:40:53.622 Initializing NVMe Controllers 00:40:53.622 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:53.622 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:53.622 Initialization complete. Launching workers. 00:40:53.622 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 103775, failed: 0 00:40:53.622 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25946, failed to submit 77829 00:40:53.622 success 0, unsuccess 25946, failed 0 00:40:53.622 14:43:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:40:53.622 14:43:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:53.622 14:43:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:40:53.622 14:43:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:53.622 14:43:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:53.622 14:43:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:53.622 14:43:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:53.622 14:43:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:40:53.622 14:43:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:40:53.622 14:43:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:57.822 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:57.822 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:57.822 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:57.822 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:57.822 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:57.822 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:57.822 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:57.822 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:57.822 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:57.822 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:57.822 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:57.822 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:57.822 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:57.822 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:40:57.822 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:57.822 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:59.204 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:40:59.464 00:40:59.464 real 0m19.852s 00:40:59.464 user 0m9.671s 00:40:59.464 sys 0m5.883s 00:40:59.464 14:43:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:59.464 14:43:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:59.464 ************************************ 00:40:59.464 END TEST kernel_target_abort 00:40:59.465 ************************************ 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:59.465 rmmod nvme_tcp 00:40:59.465 rmmod nvme_fabrics 00:40:59.465 rmmod nvme_keyring 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 859268 ']' 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 859268 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 859268 ']' 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 859268 00:40:59.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (859268) - No such process 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 859268 is not found' 00:40:59.465 Process with pid 859268 is not found 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:40:59.465 14:43:22 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:03.669 Waiting for block devices as requested 00:41:03.669 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:03.669 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:03.669 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:03.669 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:03.669 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:03.669 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:03.669 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:41:03.669 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:03.669 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:41:03.929 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:41:03.929 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:41:03.929 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:41:04.189 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:41:04.189 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:41:04.189 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:41:04.189 0000:00:01.0 (8086 
0b00): vfio-pci -> ioatdma 00:41:04.450 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:41:04.450 14:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:04.450 14:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:04.450 14:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:04.450 14:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:04.450 14:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:04.450 14:43:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:04.450 14:43:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:06.361 14:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:06.361 00:41:06.361 real 0m52.269s 00:41:06.361 user 1m5.754s 00:41:06.361 sys 0m18.775s 00:41:06.361 14:43:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:06.361 14:43:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:06.361 ************************************ 00:41:06.361 END TEST nvmf_abort_qd_sizes 00:41:06.361 ************************************ 00:41:06.621 14:43:30 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:06.621 14:43:30 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:41:06.621 14:43:30 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:06.621 14:43:30 -- common/autotest_common.sh@10 -- # set +x 00:41:06.621 ************************************ 00:41:06.622 START TEST keyring_file 00:41:06.622 ************************************ 00:41:06.622 14:43:30 keyring_file -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:41:06.622 * Looking for test storage... 
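(For reference, the clean_kernel_target / nvmftestfini teardown traced above undoes that setup in reverse order. A condensed sketch of the traced commands follows; the target of the lone 'echo 0' is not shown by xtrace and is assumed here to be the namespace enable attribute.)

    # Tear down the kernel target configured earlier (simplified sketch of the trace above).
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"          # assumed target of the 'echo 0' above
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet
    # The nvmf test helpers then unload the initiator-side modules and reset PCI bindings:
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    scripts/setup.sh reset
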
00:41:06.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:06.622 14:43:30 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:06.622 14:43:30 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:06.622 14:43:30 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:06.622 14:43:30 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:06.622 14:43:30 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:06.622 14:43:30 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:06.622 14:43:30 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:06.622 14:43:30 keyring_file -- paths/export.sh@5 -- # export PATH 00:41:06.622 14:43:30 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@47 -- # : 0 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:06.622 14:43:30 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:06.622 14:43:30 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:06.622 14:43:30 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:41:06.622 14:43:30 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:41:06.622 14:43:30 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:41:06.622 14:43:30 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WDa8na3IuI 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@705 -- # python - 00:41:06.622 14:43:30 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WDa8na3IuI 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WDa8na3IuI 00:41:06.622 14:43:30 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.WDa8na3IuI 00:41:06.622 14:43:30 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@17 -- # name=key1 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kl2oflo8R1 00:41:06.622 14:43:30 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:41:06.622 14:43:30 keyring_file -- nvmf/common.sh@705 -- # python - 00:41:06.946 14:43:30 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kl2oflo8R1 00:41:06.946 14:43:30 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kl2oflo8R1 00:41:06.946 14:43:30 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.kl2oflo8R1 00:41:06.946 14:43:30 keyring_file -- keyring/file.sh@30 -- # tgtpid=869526 00:41:06.946 14:43:30 keyring_file -- keyring/file.sh@32 -- # waitforlisten 869526 00:41:06.946 14:43:30 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:06.946 14:43:30 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 869526 ']' 00:41:06.946 14:43:30 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:06.946 14:43:30 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:06.946 14:43:30 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:06.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:06.946 14:43:30 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:06.946 14:43:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:06.946 [2024-06-07 14:43:30.370505] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
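(The two interchange-format PSK files just created, /tmp/tmp.WDa8na3IuI for key0 and /tmp/tmp.kl2oflo8R1 for key1, both set to mode 0600, drive the rest of the keyring_file test. Stripped of the bperf_cmd wrappers, the flow exercised below is roughly the following sketch; the RPC names and flags are taken from the trace, and the socket path is that of the bdevperf instance started further below.)

    # Register the PSK files with the bdevperf instance and attach over TLS
    # (sketch of the RPCs traced below).
    rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc keyring_file_add_key key0 /tmp/tmp.WDa8na3IuI
    $rpc keyring_file_add_key key1 /tmp/tmp.kl2oflo8R1
    # A key file with group/other permissions (e.g. 0660) is rejected, and attaching
    # with the mismatched key1 fails with an I/O error; both cases are exercised below.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    $rpc keyring_get_keys          # key0's refcnt rises while the controller holds it
    $rpc bdev_nvme_detach_controller nvme0
    $rpc keyring_file_remove_key key0
    $rpc keyring_file_remove_key key1
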
00:41:06.946 [2024-06-07 14:43:30.370581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869526 ] 00:41:06.946 EAL: No free 2048 kB hugepages reported on node 1 00:41:06.946 [2024-06-07 14:43:30.440600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:06.946 [2024-06-07 14:43:30.482209] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:41:07.517 14:43:31 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:07.517 14:43:31 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:41:07.517 14:43:31 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:41:07.517 14:43:31 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.517 14:43:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:07.517 [2024-06-07 14:43:31.145409] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:07.778 null0 00:41:07.778 [2024-06-07 14:43:31.177452] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:07.778 [2024-06-07 14:43:31.177751] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:07.778 [2024-06-07 14:43:31.185469] tcp.c:3670:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:07.778 14:43:31 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:07.778 [2024-06-07 14:43:31.197501] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:41:07.778 request: 00:41:07.778 { 00:41:07.778 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:41:07.778 "secure_channel": false, 00:41:07.778 "listen_address": { 00:41:07.778 "trtype": "tcp", 00:41:07.778 "traddr": "127.0.0.1", 00:41:07.778 "trsvcid": "4420" 00:41:07.778 }, 00:41:07.778 "method": "nvmf_subsystem_add_listener", 00:41:07.778 "req_id": 1 00:41:07.778 } 00:41:07.778 Got JSON-RPC error response 00:41:07.778 response: 00:41:07.778 { 00:41:07.778 "code": -32602, 00:41:07.778 "message": "Invalid parameters" 00:41:07.778 } 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:41:07.778 14:43:31 
keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:41:07.778 14:43:31 keyring_file -- keyring/file.sh@46 -- # bperfpid=869839 00:41:07.778 14:43:31 keyring_file -- keyring/file.sh@48 -- # waitforlisten 869839 /var/tmp/bperf.sock 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 869839 ']' 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:07.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:07.778 14:43:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:07.778 14:43:31 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:41:07.778 [2024-06-07 14:43:31.256901] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:41:07.778 [2024-06-07 14:43:31.256950] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869839 ] 00:41:07.778 EAL: No free 2048 kB hugepages reported on node 1 00:41:07.778 [2024-06-07 14:43:31.338447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.778 [2024-06-07 14:43:31.369501] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:41:08.717 14:43:31 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:08.717 14:43:31 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:41:08.717 14:43:31 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WDa8na3IuI 00:41:08.717 14:43:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WDa8na3IuI 00:41:08.717 14:43:32 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kl2oflo8R1 00:41:08.717 14:43:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kl2oflo8R1 00:41:08.717 14:43:32 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:41:08.717 14:43:32 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:41:08.717 14:43:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:08.717 14:43:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:08.717 14:43:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:08.977 14:43:32 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.WDa8na3IuI == \/\t\m\p\/\t\m\p\.\W\D\a\8\n\a\3\I\u\I ]] 00:41:08.977 14:43:32 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:41:08.977 14:43:32 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:41:08.977 14:43:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:08.977 14:43:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:08.977 14:43:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:09.237 14:43:32 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.kl2oflo8R1 == \/\t\m\p\/\t\m\p\.\k\l\2\o\f\l\o\8\R\1 ]] 00:41:09.237 14:43:32 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:41:09.237 14:43:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:09.237 14:43:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:09.237 14:43:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:09.237 14:43:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:09.237 14:43:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:09.237 14:43:32 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:41:09.238 14:43:32 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:41:09.238 14:43:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:09.238 14:43:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:09.238 14:43:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:09.238 14:43:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:09.238 14:43:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:09.498 14:43:32 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:41:09.498 14:43:32 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:09.498 14:43:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:09.498 [2024-06-07 14:43:33.103362] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:09.757 nvme0n1 00:41:09.757 14:43:33 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:41:09.757 14:43:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:09.757 14:43:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:09.757 14:43:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:09.757 14:43:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:09.757 14:43:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:09.757 14:43:33 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:41:09.757 14:43:33 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:41:09.757 14:43:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:09.757 14:43:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:09.757 14:43:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:09.757 
14:43:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:09.757 14:43:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:10.017 14:43:33 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:41:10.017 14:43:33 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:10.017 Running I/O for 1 seconds... 00:41:10.958 00:41:10.958 Latency(us) 00:41:10.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:10.958 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:41:10.958 nvme0n1 : 1.00 13942.00 54.46 0.00 0.00 9155.79 4341.76 20206.93 00:41:10.958 =================================================================================================================== 00:41:10.958 Total : 13942.00 54.46 0.00 0.00 9155.79 4341.76 20206.93 00:41:11.219 0 00:41:11.219 14:43:34 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:11.219 14:43:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:11.219 14:43:34 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:41:11.219 14:43:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:11.219 14:43:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:11.219 14:43:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:11.219 14:43:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:11.219 14:43:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:11.480 14:43:34 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:41:11.481 14:43:34 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:41:11.481 14:43:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:11.481 14:43:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:11.481 14:43:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:11.481 14:43:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:11.481 14:43:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:11.481 14:43:35 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:41:11.481 14:43:35 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:11.481 14:43:35 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:41:11.481 14:43:35 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:11.481 14:43:35 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:41:11.481 14:43:35 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:11.481 14:43:35 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:41:11.481 14:43:35 
keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:11.481 14:43:35 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:11.481 14:43:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:11.742 [2024-06-07 14:43:35.251668] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:11.742 [2024-06-07 14:43:35.252424] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2054720 (107): Transport endpoint is not connected 00:41:11.742 [2024-06-07 14:43:35.253420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2054720 (9): Bad file descriptor 00:41:11.742 [2024-06-07 14:43:35.254422] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:11.742 [2024-06-07 14:43:35.254428] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:41:11.742 [2024-06-07 14:43:35.254433] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:11.742 request: 00:41:11.742 { 00:41:11.742 "name": "nvme0", 00:41:11.742 "trtype": "tcp", 00:41:11.742 "traddr": "127.0.0.1", 00:41:11.742 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:11.742 "adrfam": "ipv4", 00:41:11.742 "trsvcid": "4420", 00:41:11.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:11.742 "psk": "key1", 00:41:11.742 "method": "bdev_nvme_attach_controller", 00:41:11.742 "req_id": 1 00:41:11.742 } 00:41:11.742 Got JSON-RPC error response 00:41:11.742 response: 00:41:11.742 { 00:41:11.742 "code": -5, 00:41:11.742 "message": "Input/output error" 00:41:11.742 } 00:41:11.742 14:43:35 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:41:11.742 14:43:35 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:41:11.742 14:43:35 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:41:11.742 14:43:35 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:41:11.742 14:43:35 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:41:11.742 14:43:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:11.742 14:43:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:11.742 14:43:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:11.742 14:43:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:11.742 14:43:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:12.003 14:43:35 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:41:12.003 14:43:35 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:41:12.003 14:43:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:12.003 14:43:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:12.003 14:43:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:12.003 14:43:35 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key1")' 00:41:12.003 14:43:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:12.003 14:43:35 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:41:12.003 14:43:35 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:41:12.003 14:43:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:12.264 14:43:35 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:41:12.264 14:43:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:41:12.264 14:43:35 keyring_file -- keyring/file.sh@77 -- # jq length 00:41:12.264 14:43:35 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:41:12.265 14:43:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:12.525 14:43:36 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:41:12.525 14:43:36 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.WDa8na3IuI 00:41:12.525 14:43:36 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.WDa8na3IuI 00:41:12.525 14:43:36 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:41:12.525 14:43:36 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.WDa8na3IuI 00:41:12.525 14:43:36 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:41:12.525 14:43:36 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:12.525 14:43:36 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:41:12.525 14:43:36 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:12.525 14:43:36 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WDa8na3IuI 00:41:12.525 14:43:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WDa8na3IuI 00:41:12.786 [2024-06-07 14:43:36.187197] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.WDa8na3IuI': 0100660 00:41:12.786 [2024-06-07 14:43:36.187214] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:41:12.786 request: 00:41:12.786 { 00:41:12.786 "name": "key0", 00:41:12.786 "path": "/tmp/tmp.WDa8na3IuI", 00:41:12.786 "method": "keyring_file_add_key", 00:41:12.786 "req_id": 1 00:41:12.786 } 00:41:12.786 Got JSON-RPC error response 00:41:12.786 response: 00:41:12.786 { 00:41:12.786 "code": -1, 00:41:12.786 "message": "Operation not permitted" 00:41:12.786 } 00:41:12.786 14:43:36 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:41:12.786 14:43:36 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:41:12.786 14:43:36 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:41:12.786 14:43:36 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:41:12.786 14:43:36 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.WDa8na3IuI 00:41:12.786 14:43:36 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.WDa8na3IuI 00:41:12.786 14:43:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WDa8na3IuI 00:41:12.786 14:43:36 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.WDa8na3IuI 00:41:12.786 14:43:36 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:41:12.786 14:43:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:12.786 14:43:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:12.786 14:43:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:12.786 14:43:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:12.786 14:43:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:13.047 14:43:36 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:41:13.047 14:43:36 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:13.047 14:43:36 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:41:13.047 14:43:36 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:13.047 14:43:36 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:41:13.047 14:43:36 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:13.047 14:43:36 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:41:13.047 14:43:36 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:13.047 14:43:36 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:13.047 14:43:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:13.047 [2024-06-07 14:43:36.648350] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.WDa8na3IuI': No such file or directory 00:41:13.047 [2024-06-07 14:43:36.648363] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:41:13.047 [2024-06-07 14:43:36.648384] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:41:13.047 [2024-06-07 14:43:36.648389] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:41:13.047 [2024-06-07 14:43:36.648393] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:41:13.047 request: 00:41:13.047 { 00:41:13.047 "name": "nvme0", 00:41:13.047 "trtype": "tcp", 00:41:13.047 "traddr": "127.0.0.1", 00:41:13.047 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:13.047 "adrfam": "ipv4", 00:41:13.047 "trsvcid": "4420", 00:41:13.047 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:13.047 "psk": "key0", 00:41:13.047 "method": "bdev_nvme_attach_controller", 
00:41:13.047 "req_id": 1 00:41:13.047 } 00:41:13.047 Got JSON-RPC error response 00:41:13.047 response: 00:41:13.047 { 00:41:13.047 "code": -19, 00:41:13.047 "message": "No such device" 00:41:13.047 } 00:41:13.047 14:43:36 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:41:13.047 14:43:36 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:41:13.047 14:43:36 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:41:13.047 14:43:36 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:41:13.047 14:43:36 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:41:13.047 14:43:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:13.308 14:43:36 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:13.308 14:43:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:13.308 14:43:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:13.308 14:43:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:13.308 14:43:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:13.308 14:43:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:13.308 14:43:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.K93wfdOgo2 00:41:13.308 14:43:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:13.308 14:43:36 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:13.308 14:43:36 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:41:13.308 14:43:36 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:13.308 14:43:36 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:41:13.308 14:43:36 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:41:13.308 14:43:36 keyring_file -- nvmf/common.sh@705 -- # python - 00:41:13.308 14:43:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.K93wfdOgo2 00:41:13.308 14:43:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.K93wfdOgo2 00:41:13.308 14:43:36 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.K93wfdOgo2 00:41:13.308 14:43:36 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.K93wfdOgo2 00:41:13.308 14:43:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.K93wfdOgo2 00:41:13.568 14:43:37 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:13.568 14:43:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:13.829 nvme0n1 00:41:13.829 14:43:37 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:41:13.829 14:43:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:13.829 14:43:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:13.829 14:43:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:13.829 14:43:37 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:13.829 14:43:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:13.829 14:43:37 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:41:13.829 14:43:37 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:41:13.829 14:43:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:14.221 14:43:37 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:41:14.221 14:43:37 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:41:14.221 14:43:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:14.221 14:43:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:14.221 14:43:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:14.221 14:43:37 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:41:14.221 14:43:37 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:41:14.221 14:43:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:14.221 14:43:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:14.221 14:43:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:14.221 14:43:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:14.221 14:43:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:14.483 14:43:37 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:41:14.483 14:43:37 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:14.483 14:43:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:14.483 14:43:38 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:41:14.483 14:43:38 keyring_file -- keyring/file.sh@104 -- # jq length 00:41:14.483 14:43:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:14.743 14:43:38 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:41:14.743 14:43:38 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.K93wfdOgo2 00:41:14.743 14:43:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.K93wfdOgo2 00:41:14.743 14:43:38 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kl2oflo8R1 00:41:14.744 14:43:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kl2oflo8R1 00:41:15.003 14:43:38 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:15.003 14:43:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:15.263 nvme0n1 00:41:15.263 14:43:38 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:41:15.263 14:43:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:41:15.523 14:43:38 keyring_file -- keyring/file.sh@112 -- # config='{ 00:41:15.523 "subsystems": [ 00:41:15.523 { 00:41:15.523 "subsystem": "keyring", 00:41:15.523 "config": [ 00:41:15.523 { 00:41:15.523 "method": "keyring_file_add_key", 00:41:15.523 "params": { 00:41:15.523 "name": "key0", 00:41:15.523 "path": "/tmp/tmp.K93wfdOgo2" 00:41:15.523 } 00:41:15.523 }, 00:41:15.523 { 00:41:15.523 "method": "keyring_file_add_key", 00:41:15.523 "params": { 00:41:15.523 "name": "key1", 00:41:15.523 "path": "/tmp/tmp.kl2oflo8R1" 00:41:15.523 } 00:41:15.523 } 00:41:15.523 ] 00:41:15.523 }, 00:41:15.523 { 00:41:15.523 "subsystem": "iobuf", 00:41:15.523 "config": [ 00:41:15.523 { 00:41:15.523 "method": "iobuf_set_options", 00:41:15.523 "params": { 00:41:15.523 "small_pool_count": 8192, 00:41:15.523 "large_pool_count": 1024, 00:41:15.523 "small_bufsize": 8192, 00:41:15.523 "large_bufsize": 135168 00:41:15.523 } 00:41:15.523 } 00:41:15.523 ] 00:41:15.523 }, 00:41:15.523 { 00:41:15.523 "subsystem": "sock", 00:41:15.523 "config": [ 00:41:15.523 { 00:41:15.523 "method": "sock_set_default_impl", 00:41:15.523 "params": { 00:41:15.523 "impl_name": "posix" 00:41:15.523 } 00:41:15.523 }, 00:41:15.523 { 00:41:15.523 "method": "sock_impl_set_options", 00:41:15.523 "params": { 00:41:15.523 "impl_name": "ssl", 00:41:15.523 "recv_buf_size": 4096, 00:41:15.523 "send_buf_size": 4096, 00:41:15.523 "enable_recv_pipe": true, 00:41:15.523 "enable_quickack": false, 00:41:15.523 "enable_placement_id": 0, 00:41:15.523 "enable_zerocopy_send_server": true, 00:41:15.523 "enable_zerocopy_send_client": false, 00:41:15.523 "zerocopy_threshold": 0, 00:41:15.523 "tls_version": 0, 00:41:15.523 "enable_ktls": false 00:41:15.523 } 00:41:15.523 }, 00:41:15.523 { 00:41:15.523 "method": "sock_impl_set_options", 00:41:15.523 "params": { 00:41:15.523 "impl_name": "posix", 00:41:15.523 "recv_buf_size": 2097152, 00:41:15.523 "send_buf_size": 2097152, 00:41:15.523 "enable_recv_pipe": true, 00:41:15.523 "enable_quickack": false, 00:41:15.523 "enable_placement_id": 0, 00:41:15.523 "enable_zerocopy_send_server": true, 00:41:15.523 "enable_zerocopy_send_client": false, 00:41:15.523 "zerocopy_threshold": 0, 00:41:15.523 "tls_version": 0, 00:41:15.523 "enable_ktls": false 00:41:15.523 } 00:41:15.523 } 00:41:15.523 ] 00:41:15.523 }, 00:41:15.523 { 00:41:15.523 "subsystem": "vmd", 00:41:15.523 "config": [] 00:41:15.523 }, 00:41:15.523 { 00:41:15.523 "subsystem": "accel", 00:41:15.523 "config": [ 00:41:15.523 { 00:41:15.523 "method": "accel_set_options", 00:41:15.523 "params": { 00:41:15.523 "small_cache_size": 128, 00:41:15.523 "large_cache_size": 16, 00:41:15.523 "task_count": 2048, 00:41:15.523 "sequence_count": 2048, 00:41:15.523 "buf_count": 2048 00:41:15.523 } 00:41:15.523 } 00:41:15.523 ] 00:41:15.523 }, 00:41:15.523 { 00:41:15.523 "subsystem": "bdev", 00:41:15.523 "config": [ 00:41:15.523 { 00:41:15.523 "method": "bdev_set_options", 00:41:15.523 "params": { 00:41:15.523 "bdev_io_pool_size": 65535, 00:41:15.523 "bdev_io_cache_size": 256, 00:41:15.523 "bdev_auto_examine": true, 00:41:15.523 "iobuf_small_cache_size": 128, 
00:41:15.523 "iobuf_large_cache_size": 16 00:41:15.523 } 00:41:15.523 }, 00:41:15.523 { 00:41:15.523 "method": "bdev_raid_set_options", 00:41:15.523 "params": { 00:41:15.523 "process_window_size_kb": 1024 00:41:15.523 } 00:41:15.523 }, 00:41:15.523 { 00:41:15.523 "method": "bdev_iscsi_set_options", 00:41:15.523 "params": { 00:41:15.523 "timeout_sec": 30 00:41:15.523 } 00:41:15.523 }, 00:41:15.523 { 00:41:15.523 "method": "bdev_nvme_set_options", 00:41:15.523 "params": { 00:41:15.523 "action_on_timeout": "none", 00:41:15.523 "timeout_us": 0, 00:41:15.523 "timeout_admin_us": 0, 00:41:15.523 "keep_alive_timeout_ms": 10000, 00:41:15.524 "arbitration_burst": 0, 00:41:15.524 "low_priority_weight": 0, 00:41:15.524 "medium_priority_weight": 0, 00:41:15.524 "high_priority_weight": 0, 00:41:15.524 "nvme_adminq_poll_period_us": 10000, 00:41:15.524 "nvme_ioq_poll_period_us": 0, 00:41:15.524 "io_queue_requests": 512, 00:41:15.524 "delay_cmd_submit": true, 00:41:15.524 "transport_retry_count": 4, 00:41:15.524 "bdev_retry_count": 3, 00:41:15.524 "transport_ack_timeout": 0, 00:41:15.524 "ctrlr_loss_timeout_sec": 0, 00:41:15.524 "reconnect_delay_sec": 0, 00:41:15.524 "fast_io_fail_timeout_sec": 0, 00:41:15.524 "disable_auto_failback": false, 00:41:15.524 "generate_uuids": false, 00:41:15.524 "transport_tos": 0, 00:41:15.524 "nvme_error_stat": false, 00:41:15.524 "rdma_srq_size": 0, 00:41:15.524 "io_path_stat": false, 00:41:15.524 "allow_accel_sequence": false, 00:41:15.524 "rdma_max_cq_size": 0, 00:41:15.524 "rdma_cm_event_timeout_ms": 0, 00:41:15.524 "dhchap_digests": [ 00:41:15.524 "sha256", 00:41:15.524 "sha384", 00:41:15.524 "sha512" 00:41:15.524 ], 00:41:15.524 "dhchap_dhgroups": [ 00:41:15.524 "null", 00:41:15.524 "ffdhe2048", 00:41:15.524 "ffdhe3072", 00:41:15.524 "ffdhe4096", 00:41:15.524 "ffdhe6144", 00:41:15.524 "ffdhe8192" 00:41:15.524 ] 00:41:15.524 } 00:41:15.524 }, 00:41:15.524 { 00:41:15.524 "method": "bdev_nvme_attach_controller", 00:41:15.524 "params": { 00:41:15.524 "name": "nvme0", 00:41:15.524 "trtype": "TCP", 00:41:15.524 "adrfam": "IPv4", 00:41:15.524 "traddr": "127.0.0.1", 00:41:15.524 "trsvcid": "4420", 00:41:15.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:15.524 "prchk_reftag": false, 00:41:15.524 "prchk_guard": false, 00:41:15.524 "ctrlr_loss_timeout_sec": 0, 00:41:15.524 "reconnect_delay_sec": 0, 00:41:15.524 "fast_io_fail_timeout_sec": 0, 00:41:15.524 "psk": "key0", 00:41:15.524 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:15.524 "hdgst": false, 00:41:15.524 "ddgst": false 00:41:15.524 } 00:41:15.524 }, 00:41:15.524 { 00:41:15.524 "method": "bdev_nvme_set_hotplug", 00:41:15.524 "params": { 00:41:15.524 "period_us": 100000, 00:41:15.524 "enable": false 00:41:15.524 } 00:41:15.524 }, 00:41:15.524 { 00:41:15.524 "method": "bdev_wait_for_examine" 00:41:15.524 } 00:41:15.524 ] 00:41:15.524 }, 00:41:15.524 { 00:41:15.524 "subsystem": "nbd", 00:41:15.524 "config": [] 00:41:15.524 } 00:41:15.524 ] 00:41:15.524 }' 00:41:15.524 14:43:38 keyring_file -- keyring/file.sh@114 -- # killprocess 869839 00:41:15.524 14:43:38 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 869839 ']' 00:41:15.524 14:43:38 keyring_file -- common/autotest_common.sh@953 -- # kill -0 869839 00:41:15.524 14:43:38 keyring_file -- common/autotest_common.sh@954 -- # uname 00:41:15.524 14:43:38 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:15.524 14:43:38 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 869839 00:41:15.524 14:43:38 keyring_file -- 
common/autotest_common.sh@955 -- # process_name=reactor_1 00:41:15.524 14:43:38 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:41:15.524 14:43:38 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 869839' 00:41:15.524 killing process with pid 869839 00:41:15.524 14:43:38 keyring_file -- common/autotest_common.sh@968 -- # kill 869839 00:41:15.524 Received shutdown signal, test time was about 1.000000 seconds 00:41:15.524 00:41:15.524 Latency(us) 00:41:15.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:15.524 =================================================================================================================== 00:41:15.524 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:15.524 14:43:38 keyring_file -- common/autotest_common.sh@973 -- # wait 869839 00:41:15.524 14:43:39 keyring_file -- keyring/file.sh@117 -- # bperfpid=871341 00:41:15.524 14:43:39 keyring_file -- keyring/file.sh@119 -- # waitforlisten 871341 /var/tmp/bperf.sock 00:41:15.524 14:43:39 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 871341 ']' 00:41:15.524 14:43:39 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:15.524 14:43:39 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:15.524 14:43:39 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:15.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:15.524 14:43:39 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:15.524 14:43:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:15.524 14:43:39 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:41:15.524 14:43:39 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:41:15.524 "subsystems": [ 00:41:15.524 { 00:41:15.524 "subsystem": "keyring", 00:41:15.524 "config": [ 00:41:15.524 { 00:41:15.524 "method": "keyring_file_add_key", 00:41:15.524 "params": { 00:41:15.524 "name": "key0", 00:41:15.524 "path": "/tmp/tmp.K93wfdOgo2" 00:41:15.524 } 00:41:15.524 }, 00:41:15.524 { 00:41:15.524 "method": "keyring_file_add_key", 00:41:15.524 "params": { 00:41:15.524 "name": "key1", 00:41:15.524 "path": "/tmp/tmp.kl2oflo8R1" 00:41:15.524 } 00:41:15.524 } 00:41:15.524 ] 00:41:15.524 }, 00:41:15.524 { 00:41:15.524 "subsystem": "iobuf", 00:41:15.524 "config": [ 00:41:15.524 { 00:41:15.524 "method": "iobuf_set_options", 00:41:15.524 "params": { 00:41:15.524 "small_pool_count": 8192, 00:41:15.524 "large_pool_count": 1024, 00:41:15.524 "small_bufsize": 8192, 00:41:15.524 "large_bufsize": 135168 00:41:15.524 } 00:41:15.524 } 00:41:15.524 ] 00:41:15.524 }, 00:41:15.524 { 00:41:15.524 "subsystem": "sock", 00:41:15.524 "config": [ 00:41:15.524 { 00:41:15.524 "method": "sock_set_default_impl", 00:41:15.524 "params": { 00:41:15.524 "impl_name": "posix" 00:41:15.524 } 00:41:15.524 }, 00:41:15.524 { 00:41:15.524 "method": "sock_impl_set_options", 00:41:15.524 "params": { 00:41:15.524 "impl_name": "ssl", 00:41:15.524 "recv_buf_size": 4096, 00:41:15.524 "send_buf_size": 4096, 00:41:15.524 "enable_recv_pipe": true, 00:41:15.524 "enable_quickack": false, 00:41:15.524 "enable_placement_id": 0, 00:41:15.524 "enable_zerocopy_send_server": true, 00:41:15.524 
"enable_zerocopy_send_client": false, 00:41:15.524 "zerocopy_threshold": 0, 00:41:15.524 "tls_version": 0, 00:41:15.524 "enable_ktls": false 00:41:15.524 } 00:41:15.524 }, 00:41:15.524 { 00:41:15.524 "method": "sock_impl_set_options", 00:41:15.524 "params": { 00:41:15.524 "impl_name": "posix", 00:41:15.524 "recv_buf_size": 2097152, 00:41:15.524 "send_buf_size": 2097152, 00:41:15.524 "enable_recv_pipe": true, 00:41:15.524 "enable_quickack": false, 00:41:15.524 "enable_placement_id": 0, 00:41:15.524 "enable_zerocopy_send_server": true, 00:41:15.524 "enable_zerocopy_send_client": false, 00:41:15.524 "zerocopy_threshold": 0, 00:41:15.524 "tls_version": 0, 00:41:15.524 "enable_ktls": false 00:41:15.524 } 00:41:15.524 } 00:41:15.524 ] 00:41:15.524 }, 00:41:15.524 { 00:41:15.524 "subsystem": "vmd", 00:41:15.524 "config": [] 00:41:15.524 }, 00:41:15.524 { 00:41:15.524 "subsystem": "accel", 00:41:15.524 "config": [ 00:41:15.524 { 00:41:15.524 "method": "accel_set_options", 00:41:15.524 "params": { 00:41:15.524 "small_cache_size": 128, 00:41:15.524 "large_cache_size": 16, 00:41:15.524 "task_count": 2048, 00:41:15.524 "sequence_count": 2048, 00:41:15.524 "buf_count": 2048 00:41:15.524 } 00:41:15.524 } 00:41:15.524 ] 00:41:15.524 }, 00:41:15.524 { 00:41:15.524 "subsystem": "bdev", 00:41:15.524 "config": [ 00:41:15.524 { 00:41:15.524 "method": "bdev_set_options", 00:41:15.524 "params": { 00:41:15.524 "bdev_io_pool_size": 65535, 00:41:15.524 "bdev_io_cache_size": 256, 00:41:15.524 "bdev_auto_examine": true, 00:41:15.524 "iobuf_small_cache_size": 128, 00:41:15.524 "iobuf_large_cache_size": 16 00:41:15.524 } 00:41:15.525 }, 00:41:15.525 { 00:41:15.525 "method": "bdev_raid_set_options", 00:41:15.525 "params": { 00:41:15.525 "process_window_size_kb": 1024 00:41:15.525 } 00:41:15.525 }, 00:41:15.525 { 00:41:15.525 "method": "bdev_iscsi_set_options", 00:41:15.525 "params": { 00:41:15.525 "timeout_sec": 30 00:41:15.525 } 00:41:15.525 }, 00:41:15.525 { 00:41:15.525 "method": "bdev_nvme_set_options", 00:41:15.525 "params": { 00:41:15.525 "action_on_timeout": "none", 00:41:15.525 "timeout_us": 0, 00:41:15.525 "timeout_admin_us": 0, 00:41:15.525 "keep_alive_timeout_ms": 10000, 00:41:15.525 "arbitration_burst": 0, 00:41:15.525 "low_priority_weight": 0, 00:41:15.525 "medium_priority_weight": 0, 00:41:15.525 "high_priority_weight": 0, 00:41:15.525 "nvme_adminq_poll_period_us": 10000, 00:41:15.525 "nvme_ioq_poll_period_us": 0, 00:41:15.525 "io_queue_requests": 512, 00:41:15.525 "delay_cmd_submit": true, 00:41:15.525 "transport_retry_count": 4, 00:41:15.525 "bdev_retry_count": 3, 00:41:15.525 "transport_ack_timeout": 0, 00:41:15.525 "ctrlr_loss_timeout_sec": 0, 00:41:15.525 "reconnect_delay_sec": 0, 00:41:15.525 "fast_io_fail_timeout_sec": 0, 00:41:15.525 "disable_auto_failback": false, 00:41:15.525 "generate_uuids": false, 00:41:15.525 "transport_tos": 0, 00:41:15.525 "nvme_error_stat": false, 00:41:15.525 "rdma_srq_size": 0, 00:41:15.525 "io_path_stat": false, 00:41:15.525 "allow_accel_sequence": false, 00:41:15.525 "rdma_max_cq_size": 0, 00:41:15.525 "rdma_cm_event_timeout_ms": 0, 00:41:15.525 "dhchap_digests": [ 00:41:15.525 "sha256", 00:41:15.525 "sha384", 00:41:15.525 "sha512" 00:41:15.525 ], 00:41:15.525 "dhchap_dhgroups": [ 00:41:15.525 "null", 00:41:15.525 "ffdhe2048", 00:41:15.525 "ffdhe3072", 00:41:15.525 "ffdhe4096", 00:41:15.525 "ffdhe6144", 00:41:15.525 "ffdhe8192" 00:41:15.525 ] 00:41:15.525 } 00:41:15.525 }, 00:41:15.525 { 00:41:15.525 "method": "bdev_nvme_attach_controller", 00:41:15.525 "params": { 
00:41:15.525 "name": "nvme0", 00:41:15.525 "trtype": "TCP", 00:41:15.525 "adrfam": "IPv4", 00:41:15.525 "traddr": "127.0.0.1", 00:41:15.525 "trsvcid": "4420", 00:41:15.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:15.525 "prchk_reftag": false, 00:41:15.525 "prchk_guard": false, 00:41:15.525 "ctrlr_loss_timeout_sec": 0, 00:41:15.525 "reconnect_delay_sec": 0, 00:41:15.525 "fast_io_fail_timeout_sec": 0, 00:41:15.525 "psk": "key0", 00:41:15.525 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:15.525 "hdgst": false, 00:41:15.525 "ddgst": false 00:41:15.525 } 00:41:15.525 }, 00:41:15.525 { 00:41:15.525 "method": "bdev_nvme_set_hotplug", 00:41:15.525 "params": { 00:41:15.525 "period_us": 100000, 00:41:15.525 "enable": false 00:41:15.525 } 00:41:15.525 }, 00:41:15.525 { 00:41:15.525 "method": "bdev_wait_for_examine" 00:41:15.525 } 00:41:15.525 ] 00:41:15.525 }, 00:41:15.525 { 00:41:15.525 "subsystem": "nbd", 00:41:15.525 "config": [] 00:41:15.525 } 00:41:15.525 ] 00:41:15.525 }' 00:41:15.525 [2024-06-07 14:43:39.136753] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:41:15.525 [2024-06-07 14:43:39.136812] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871341 ] 00:41:15.525 EAL: No free 2048 kB hugepages reported on node 1 00:41:15.784 [2024-06-07 14:43:39.213607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:15.784 [2024-06-07 14:43:39.241738] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:41:15.784 [2024-06-07 14:43:39.377458] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:16.356 14:43:39 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:16.356 14:43:39 keyring_file -- common/autotest_common.sh@863 -- # return 0 00:41:16.356 14:43:39 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:41:16.356 14:43:39 keyring_file -- keyring/file.sh@120 -- # jq length 00:41:16.356 14:43:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:16.617 14:43:40 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:41:16.617 14:43:40 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:41:16.617 14:43:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:16.617 14:43:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:16.617 14:43:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:16.617 14:43:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:16.617 14:43:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:16.617 14:43:40 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:41:16.617 14:43:40 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:41:16.617 14:43:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:16.617 14:43:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:16.617 14:43:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:16.617 14:43:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:16.617 14:43:40 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:16.877 14:43:40 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:41:16.877 14:43:40 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:41:16.877 14:43:40 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:41:16.877 14:43:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:41:17.138 14:43:40 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:41:17.138 14:43:40 keyring_file -- keyring/file.sh@1 -- # cleanup 00:41:17.138 14:43:40 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.K93wfdOgo2 /tmp/tmp.kl2oflo8R1 00:41:17.138 14:43:40 keyring_file -- keyring/file.sh@20 -- # killprocess 871341 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 871341 ']' 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@953 -- # kill -0 871341 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@954 -- # uname 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 871341 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 871341' 00:41:17.138 killing process with pid 871341 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@968 -- # kill 871341 00:41:17.138 Received shutdown signal, test time was about 1.000000 seconds 00:41:17.138 00:41:17.138 Latency(us) 00:41:17.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:17.138 =================================================================================================================== 00:41:17.138 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@973 -- # wait 871341 00:41:17.138 14:43:40 keyring_file -- keyring/file.sh@21 -- # killprocess 869526 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 869526 ']' 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@953 -- # kill -0 869526 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@954 -- # uname 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 869526 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 869526' 00:41:17.138 killing process with pid 869526 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@968 -- # kill 869526 00:41:17.138 [2024-06-07 14:43:40.747893] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:41:17.138 14:43:40 keyring_file -- common/autotest_common.sh@973 -- # wait 869526 00:41:17.399 00:41:17.399 real 0m10.871s 00:41:17.399 user 0m25.936s 00:41:17.399 sys 
0m2.548s 00:41:17.399 14:43:40 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:17.399 14:43:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:17.399 ************************************ 00:41:17.399 END TEST keyring_file 00:41:17.399 ************************************ 00:41:17.399 14:43:40 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:41:17.399 14:43:40 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:41:17.399 14:43:40 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:41:17.399 14:43:40 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:17.399 14:43:40 -- common/autotest_common.sh@10 -- # set +x 00:41:17.399 ************************************ 00:41:17.399 START TEST keyring_linux 00:41:17.399 ************************************ 00:41:17.399 14:43:41 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:41:17.661 * Looking for test storage... 00:41:17.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:41:17.661 14:43:41 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:17.661 14:43:41 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:17.661 14:43:41 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:17.661 14:43:41 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:17.661 14:43:41 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.661 14:43:41 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.661 14:43:41 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.661 14:43:41 keyring_linux -- paths/export.sh@5 -- # export PATH 00:41:17.661 14:43:41 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:17.661 14:43:41 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:17.661 14:43:41 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:17.661 14:43:41 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:41:17.661 14:43:41 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:41:17.661 14:43:41 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:41:17.661 14:43:41 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@17 -- # 
key=00112233445566778899aabbccddeeff 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@705 -- # python - 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:41:17.661 /tmp/:spdk-test:key0 00:41:17.661 14:43:41 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:41:17.661 14:43:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:41:17.661 14:43:41 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:41:17.662 14:43:41 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:41:17.662 14:43:41 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:41:17.662 14:43:41 keyring_linux -- nvmf/common.sh@705 -- # python - 00:41:17.662 14:43:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:41:17.662 14:43:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:41:17.662 /tmp/:spdk-test:key1 00:41:17.662 14:43:41 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=871783 00:41:17.662 14:43:41 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 871783 00:41:17.662 14:43:41 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 871783 ']' 00:41:17.662 14:43:41 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:17.662 14:43:41 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:17.662 14:43:41 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:17.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
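The prep_key/format_interchange_psk trace above is what turns the raw hex string 00112233445566778899aabbccddeeff into the NVMeTLSkey-1 value that later shows up in the keyctl calls. A minimal sketch of what that inline-python step appears to do, under stated assumptions (appending a little-endian zlib CRC32 of the key bytes before base64-encoding is an assumption about nvmf/common.sh, not its verbatim code):

key=00112233445566778899aabbccddeeff   # same test key as in the trace
digest=0                               # 0 = no HMAC, matching the trace
python3 - "$key" "$digest" <<'PY'
import base64, struct, sys, zlib
k = sys.argv[1].encode()                 # key is treated as ASCII bytes
crc = struct.pack('<I', zlib.crc32(k))   # 4-byte CRC32 appended before encoding (assumption)
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{base64.b64encode(k + crc).decode()}:")
PY

If those assumptions hold, this produces strings of the shape seen above (NVMeTLSkey-1:00:MDAx...JEiQ:), which prep_key then writes to /tmp/:spdk-test:key0 and chmods to 0600.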
00:41:17.662 14:43:41 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:17.662 14:43:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:17.662 14:43:41 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:41:17.662 [2024-06-07 14:43:41.281778] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 00:41:17.662 [2024-06-07 14:43:41.281843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871783 ] 00:41:17.923 EAL: No free 2048 kB hugepages reported on node 1 00:41:17.923 [2024-06-07 14:43:41.351557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:17.923 [2024-06-07 14:43:41.391821] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:41:18.492 14:43:42 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:18.493 14:43:42 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:41:18.493 14:43:42 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:41:18.493 14:43:42 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:18.493 14:43:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:18.493 [2024-06-07 14:43:42.031938] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:18.493 null0 00:41:18.493 [2024-06-07 14:43:42.063982] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:18.493 [2024-06-07 14:43:42.064496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:18.493 14:43:42 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:18.493 14:43:42 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:41:18.493 587029652 00:41:18.493 14:43:42 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:41:18.493 391065341 00:41:18.493 14:43:42 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=872098 00:41:18.493 14:43:42 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 872098 /var/tmp/bperf.sock 00:41:18.493 14:43:42 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:41:18.493 14:43:42 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 872098 ']' 00:41:18.493 14:43:42 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:18.493 14:43:42 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:18.493 14:43:42 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:18.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:18.493 14:43:42 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:18.493 14:43:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:18.753 [2024-06-07 14:43:42.145152] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 22.11.4 initialization... 
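The two keyctl add entries above (serials 587029652 and 391065341) are how the keyring_linux path stages its PSKs: instead of key files, the interchange-format keys live in the kernel session keyring and bdevperf later references them by name (--psk :spdk-test:key0). A compact sketch of that lifecycle, using only commands that appear in the trace, with the PSK payload elided and the serial treated as illustrative:

psk='NVMeTLSkey-1:00:...'                           # interchange-format PSK (elided here)
sn=$(keyctl add user ':spdk-test:key0' "$psk" @s)   # store in the session keyring, prints the serial
keyctl search @s user ':spdk-test:key0'             # resolves the name back to the same serial
keyctl print "$sn"                                  # dumps the stored PSK payload
keyctl unlink "$sn"                                 # cleanup step linux.sh runs later ("1 links removed")

The serial returned by keyctl add is what linux.sh later compares against the .sn field that keyring_get_keys reports over the bperf RPC socket.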
00:41:18.753 [2024-06-07 14:43:42.145244] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872098 ] 00:41:18.753 EAL: No free 2048 kB hugepages reported on node 1 00:41:18.753 [2024-06-07 14:43:42.224055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:18.753 [2024-06-07 14:43:42.252306] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:41:19.325 14:43:42 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:19.325 14:43:42 keyring_linux -- common/autotest_common.sh@863 -- # return 0 00:41:19.325 14:43:42 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:41:19.325 14:43:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:41:19.586 14:43:43 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:41:19.586 14:43:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:19.586 14:43:43 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:19.586 14:43:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:19.846 [2024-06-07 14:43:43.360851] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:19.846 nvme0n1 00:41:19.846 14:43:43 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:41:19.846 14:43:43 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:41:19.846 14:43:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:41:19.846 14:43:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:41:19.846 14:43:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:41:19.846 14:43:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:20.106 14:43:43 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:41:20.106 14:43:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:41:20.106 14:43:43 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:41:20.106 14:43:43 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:41:20.106 14:43:43 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:20.106 14:43:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:20.106 14:43:43 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:41:20.366 14:43:43 keyring_linux -- keyring/linux.sh@25 -- # sn=587029652 00:41:20.366 14:43:43 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:41:20.366 14:43:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
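The check_keys flow above ties the two views together: the key list reported over the bperf RPC socket and the serial number the kernel reports for the same key name must agree. A hedged sketch of that comparison (the rpc.py path is shortened, and the jq filter is an assumption modeled on the keyring/common.sh and linux.sh traces):

sock=/var/tmp/bperf.sock
name=':spdk-test:key0'
sn_rpc=$(scripts/rpc.py -s "$sock" keyring_get_keys \
         | jq -r --arg n "$name" '.[] | select(.name == $n) | .sn')   # SPDK's view of the key
sn_kernel=$(keyctl search @s user "$name")                            # kernel keyring's view
[[ "$sn_rpc" == "$sn_kernel" ]] && echo "key $name maps to serial $sn_rpc in both places"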
00:41:20.366 14:43:43 keyring_linux -- keyring/linux.sh@26 -- # [[ 587029652 == \5\8\7\0\2\9\6\5\2 ]] 00:41:20.366 14:43:43 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 587029652 00:41:20.366 14:43:43 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:41:20.366 14:43:43 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:20.366 Running I/O for 1 seconds... 00:41:21.307 00:41:21.307 Latency(us) 00:41:21.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:21.307 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:41:21.307 nvme0n1 : 1.01 15076.40 58.89 0.00 0.00 8451.02 7427.41 15291.73 00:41:21.307 =================================================================================================================== 00:41:21.307 Total : 15076.40 58.89 0.00 0.00 8451.02 7427.41 15291.73 00:41:21.307 0 00:41:21.307 14:43:44 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:21.307 14:43:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:21.568 14:43:45 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:41:21.568 14:43:45 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:41:21.568 14:43:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:41:21.568 14:43:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:41:21.568 14:43:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:21.568 14:43:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:41:21.568 14:43:45 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:41:21.568 14:43:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:41:21.568 14:43:45 keyring_linux -- keyring/linux.sh@23 -- # return 00:41:21.568 14:43:45 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:21.568 14:43:45 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 00:41:21.568 14:43:45 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:21.568 14:43:45 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:41:21.568 14:43:45 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:21.568 14:43:45 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:41:21.568 14:43:45 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:21.568 14:43:45 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:21.568 14:43:45 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:21.828 [2024-06-07 14:43:45.344016] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:21.828 [2024-06-07 14:43:45.344775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1275680 (107): Transport endpoint is not connected 00:41:21.828 [2024-06-07 14:43:45.345770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1275680 (9): Bad file descriptor 00:41:21.828 [2024-06-07 14:43:45.346772] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:41:21.828 [2024-06-07 14:43:45.346779] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:41:21.828 [2024-06-07 14:43:45.346785] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:41:21.828 request: 00:41:21.828 { 00:41:21.828 "name": "nvme0", 00:41:21.828 "trtype": "tcp", 00:41:21.828 "traddr": "127.0.0.1", 00:41:21.828 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:21.828 "adrfam": "ipv4", 00:41:21.828 "trsvcid": "4420", 00:41:21.828 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:21.828 "psk": ":spdk-test:key1", 00:41:21.828 "method": "bdev_nvme_attach_controller", 00:41:21.828 "req_id": 1 00:41:21.828 } 00:41:21.828 Got JSON-RPC error response 00:41:21.828 response: 00:41:21.828 { 00:41:21.828 "code": -5, 00:41:21.828 "message": "Input/output error" 00:41:21.828 } 00:41:21.828 14:43:45 keyring_linux -- common/autotest_common.sh@652 -- # es=1 00:41:21.828 14:43:45 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:41:21.828 14:43:45 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:41:21.828 14:43:45 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:41:21.828 14:43:45 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:41:21.828 14:43:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:41:21.828 14:43:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:41:21.828 14:43:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:41:21.828 14:43:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:41:21.828 14:43:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:41:21.828 14:43:45 keyring_linux -- keyring/linux.sh@33 -- # sn=587029652 00:41:21.828 14:43:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 587029652 00:41:21.829 1 links removed 00:41:21.829 14:43:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:41:21.829 14:43:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:41:21.829 14:43:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:41:21.829 14:43:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:41:21.829 14:43:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:41:21.829 14:43:45 keyring_linux -- keyring/linux.sh@33 -- # sn=391065341 00:41:21.829 14:43:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 391065341 00:41:21.829 1 links removed 00:41:21.829 14:43:45 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 872098 00:41:21.829 14:43:45 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 872098 ']' 00:41:21.829 14:43:45 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 872098 00:41:21.829 14:43:45 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:41:21.829 14:43:45 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:21.829 14:43:45 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 872098 00:41:21.829 14:43:45 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:41:21.829 14:43:45 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:41:21.829 14:43:45 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 872098' 00:41:21.829 killing process with pid 872098 00:41:21.829 14:43:45 keyring_linux -- common/autotest_common.sh@968 -- # kill 872098 00:41:21.829 Received shutdown signal, test time was about 1.000000 seconds 00:41:21.829 00:41:21.829 Latency(us) 00:41:21.829 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:21.829 =================================================================================================================== 00:41:21.829 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:21.829 14:43:45 keyring_linux -- common/autotest_common.sh@973 -- # wait 872098 00:41:22.089 14:43:45 keyring_linux -- keyring/linux.sh@42 -- # killprocess 871783 00:41:22.089 14:43:45 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 871783 ']' 00:41:22.089 14:43:45 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 871783 00:41:22.089 14:43:45 keyring_linux -- common/autotest_common.sh@954 -- # uname 00:41:22.089 14:43:45 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:22.089 14:43:45 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 871783 00:41:22.089 14:43:45 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:41:22.089 14:43:45 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:41:22.089 14:43:45 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 871783' 00:41:22.089 killing process with pid 871783 00:41:22.089 14:43:45 keyring_linux -- common/autotest_common.sh@968 -- # kill 871783 00:41:22.089 14:43:45 keyring_linux -- common/autotest_common.sh@973 -- # wait 871783 00:41:22.349 00:41:22.349 real 0m4.751s 00:41:22.349 user 0m8.574s 00:41:22.349 sys 0m1.379s 00:41:22.349 14:43:45 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:22.349 14:43:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:22.349 ************************************ 00:41:22.349 END TEST keyring_linux 00:41:22.349 ************************************ 00:41:22.349 14:43:45 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:41:22.349 14:43:45 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:41:22.349 14:43:45 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:41:22.349 14:43:45 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:41:22.349 14:43:45 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:41:22.349 14:43:45 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:41:22.349 14:43:45 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:41:22.349 14:43:45 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:41:22.349 14:43:45 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:41:22.349 14:43:45 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 
']' 00:41:22.349 14:43:45 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:41:22.349 14:43:45 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:41:22.349 14:43:45 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:41:22.349 14:43:45 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:41:22.349 14:43:45 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:41:22.349 14:43:45 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:41:22.349 14:43:45 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:41:22.349 14:43:45 -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:22.349 14:43:45 -- common/autotest_common.sh@10 -- # set +x 00:41:22.349 14:43:45 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:41:22.349 14:43:45 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:41:22.349 14:43:45 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:41:22.349 14:43:45 -- common/autotest_common.sh@10 -- # set +x 00:41:30.490 INFO: APP EXITING 00:41:30.490 INFO: killing all VMs 00:41:30.490 INFO: killing vhost app 00:41:30.490 INFO: EXIT DONE 00:41:33.791 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:41:33.791 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:41:33.791 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:41:33.791 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:41:33.791 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:41:33.791 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:41:33.791 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:41:33.791 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:41:33.791 0000:65:00.0 (144d a80a): Already using the nvme driver 00:41:33.791 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:41:33.791 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:41:33.791 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:41:33.791 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:41:33.791 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:41:33.791 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:41:33.791 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:41:33.791 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:41:38.001 Cleaning 00:41:38.001 Removing: /var/run/dpdk/spdk0/config 00:41:38.001 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:41:38.001 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:41:38.001 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:41:38.001 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:41:38.001 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:41:38.001 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:41:38.001 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:41:38.001 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:41:38.001 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:41:38.001 Removing: /var/run/dpdk/spdk0/hugepage_info 00:41:38.001 Removing: /var/run/dpdk/spdk1/config 00:41:38.001 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:41:38.001 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:41:38.001 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:41:38.001 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:41:38.001 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:41:38.001 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:41:38.001 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:41:38.001 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:41:38.001 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:41:38.001 Removing: /var/run/dpdk/spdk1/hugepage_info 00:41:38.001 Removing: /var/run/dpdk/spdk1/mp_socket 00:41:38.001 Removing: /var/run/dpdk/spdk2/config 00:41:38.001 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:41:38.001 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:41:38.001 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:41:38.001 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:41:38.001 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:41:38.001 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:41:38.001 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:41:38.001 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:41:38.001 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:41:38.001 Removing: /var/run/dpdk/spdk2/hugepage_info 00:41:38.001 Removing: /var/run/dpdk/spdk3/config 00:41:38.001 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:41:38.001 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:41:38.001 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:41:38.001 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:41:38.001 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:41:38.001 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:41:38.001 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:41:38.001 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:41:38.001 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:41:38.001 Removing: /var/run/dpdk/spdk3/hugepage_info 00:41:38.001 Removing: /var/run/dpdk/spdk4/config 00:41:38.001 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:41:38.001 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:41:38.001 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:41:38.001 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:41:38.001 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:41:38.001 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:41:38.001 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:41:38.001 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:41:38.001 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:41:38.001 Removing: /var/run/dpdk/spdk4/hugepage_info 00:41:38.001 Removing: /dev/shm/bdev_svc_trace.1 00:41:38.001 Removing: /dev/shm/nvmf_trace.0 00:41:38.001 Removing: /dev/shm/spdk_tgt_trace.pid287622 00:41:38.001 Removing: /var/run/dpdk/spdk0 00:41:38.001 Removing: /var/run/dpdk/spdk1 00:41:38.001 Removing: /var/run/dpdk/spdk2 00:41:38.001 Removing: /var/run/dpdk/spdk3 00:41:38.001 Removing: /var/run/dpdk/spdk4 00:41:38.001 Removing: /var/run/dpdk/spdk_pid286016 00:41:38.001 Removing: /var/run/dpdk/spdk_pid287622 00:41:38.001 Removing: /var/run/dpdk/spdk_pid288142 00:41:38.001 Removing: /var/run/dpdk/spdk_pid289195 00:41:38.001 Removing: /var/run/dpdk/spdk_pid289516 00:41:38.001 Removing: /var/run/dpdk/spdk_pid290583 00:41:38.001 Removing: /var/run/dpdk/spdk_pid290920 00:41:38.001 Removing: /var/run/dpdk/spdk_pid291041 00:41:38.001 Removing: /var/run/dpdk/spdk_pid292166 00:41:38.001 Removing: /var/run/dpdk/spdk_pid292711 00:41:38.001 Removing: /var/run/dpdk/spdk_pid293018 00:41:38.001 Removing: /var/run/dpdk/spdk_pid293399 00:41:38.001 Removing: /var/run/dpdk/spdk_pid293806 00:41:38.001 Removing: /var/run/dpdk/spdk_pid294192 00:41:38.001 Removing: /var/run/dpdk/spdk_pid294453 00:41:38.001 Removing: /var/run/dpdk/spdk_pid294593 00:41:38.001 
Removing: /var/run/dpdk/spdk_pid294963 00:41:38.001 Removing: /var/run/dpdk/spdk_pid296352 00:41:38.001 Removing: /var/run/dpdk/spdk_pid299618 00:41:38.001 Removing: /var/run/dpdk/spdk_pid299748 00:41:38.001 Removing: /var/run/dpdk/spdk_pid300059 00:41:38.001 Removing: /var/run/dpdk/spdk_pid300352 00:41:38.001 Removing: /var/run/dpdk/spdk_pid300732 00:41:38.001 Removing: /var/run/dpdk/spdk_pid301023 00:41:38.001 Removing: /var/run/dpdk/spdk_pid301438 00:41:38.001 Removing: /var/run/dpdk/spdk_pid301467 00:41:38.001 Removing: /var/run/dpdk/spdk_pid301817 00:41:38.001 Removing: /var/run/dpdk/spdk_pid302075 00:41:38.001 Removing: /var/run/dpdk/spdk_pid302191 00:41:38.001 Removing: /var/run/dpdk/spdk_pid302511 00:41:38.001 Removing: /var/run/dpdk/spdk_pid302962 00:41:38.001 Removing: /var/run/dpdk/spdk_pid303213 00:41:38.001 Removing: /var/run/dpdk/spdk_pid303445 00:41:38.001 Removing: /var/run/dpdk/spdk_pid303759 00:41:38.001 Removing: /var/run/dpdk/spdk_pid303795 00:41:38.001 Removing: /var/run/dpdk/spdk_pid304079 00:41:38.001 Removing: /var/run/dpdk/spdk_pid304237 00:41:38.001 Removing: /var/run/dpdk/spdk_pid304609 00:41:38.001 Removing: /var/run/dpdk/spdk_pid305008 00:41:38.001 Removing: /var/run/dpdk/spdk_pid305361 00:41:38.001 Removing: /var/run/dpdk/spdk_pid305539 00:41:38.001 Removing: /var/run/dpdk/spdk_pid305740 00:41:38.001 Removing: /var/run/dpdk/spdk_pid306095 00:41:38.001 Removing: /var/run/dpdk/spdk_pid306634 00:41:38.001 Removing: /var/run/dpdk/spdk_pid307086 00:41:38.001 Removing: /var/run/dpdk/spdk_pid307341 00:41:38.001 Removing: /var/run/dpdk/spdk_pid307627 00:41:38.001 Removing: /var/run/dpdk/spdk_pid307978 00:41:38.001 Removing: /var/run/dpdk/spdk_pid308326 00:41:38.001 Removing: /var/run/dpdk/spdk_pid308522 00:41:38.001 Removing: /var/run/dpdk/spdk_pid308715 00:41:38.001 Removing: /var/run/dpdk/spdk_pid309062 00:41:38.001 Removing: /var/run/dpdk/spdk_pid309423 00:41:38.001 Removing: /var/run/dpdk/spdk_pid309736 00:41:38.001 Removing: /var/run/dpdk/spdk_pid309893 00:41:38.001 Removing: /var/run/dpdk/spdk_pid310159 00:41:38.001 Removing: /var/run/dpdk/spdk_pid310399 00:41:38.001 Removing: /var/run/dpdk/spdk_pid310712 00:41:38.001 Removing: /var/run/dpdk/spdk_pid315768 00:41:38.001 Removing: /var/run/dpdk/spdk_pid416223 00:41:38.001 Removing: /var/run/dpdk/spdk_pid421664 00:41:38.001 Removing: /var/run/dpdk/spdk_pid434017 00:41:38.001 Removing: /var/run/dpdk/spdk_pid440751 00:41:38.001 Removing: /var/run/dpdk/spdk_pid446124 00:41:38.001 Removing: /var/run/dpdk/spdk_pid446795 00:41:38.001 Removing: /var/run/dpdk/spdk_pid462076 00:41:38.001 Removing: /var/run/dpdk/spdk_pid462120 00:41:38.001 Removing: /var/run/dpdk/spdk_pid463135 00:41:38.001 Removing: /var/run/dpdk/spdk_pid464155 00:41:38.001 Removing: /var/run/dpdk/spdk_pid465185 00:41:38.002 Removing: /var/run/dpdk/spdk_pid465807 00:41:38.002 Removing: /var/run/dpdk/spdk_pid465957 00:41:38.002 Removing: /var/run/dpdk/spdk_pid466175 00:41:38.002 Removing: /var/run/dpdk/spdk_pid466428 00:41:38.002 Removing: /var/run/dpdk/spdk_pid466430 00:41:38.002 Removing: /var/run/dpdk/spdk_pid467435 00:41:38.002 Removing: /var/run/dpdk/spdk_pid468435 00:41:38.002 Removing: /var/run/dpdk/spdk_pid469449 00:41:38.002 Removing: /var/run/dpdk/spdk_pid470118 00:41:38.002 Removing: /var/run/dpdk/spdk_pid470120 00:41:38.002 Removing: /var/run/dpdk/spdk_pid470461 00:41:38.002 Removing: /var/run/dpdk/spdk_pid471886 00:41:38.002 Removing: /var/run/dpdk/spdk_pid473156 00:41:38.002 Removing: /var/run/dpdk/spdk_pid483664 00:41:38.002 Removing: 
/var/run/dpdk/spdk_pid484081 00:41:38.002 Removing: /var/run/dpdk/spdk_pid489708 00:41:38.002 Removing: /var/run/dpdk/spdk_pid496788 00:41:38.002 Removing: /var/run/dpdk/spdk_pid500159 00:41:38.002 Removing: /var/run/dpdk/spdk_pid513603 00:41:38.002 Removing: /var/run/dpdk/spdk_pid525339 00:41:38.002 Removing: /var/run/dpdk/spdk_pid527337 00:41:38.002 Removing: /var/run/dpdk/spdk_pid528490 00:41:38.002 Removing: /var/run/dpdk/spdk_pid550322 00:41:38.002 Removing: /var/run/dpdk/spdk_pid555367 00:41:38.002 Removing: /var/run/dpdk/spdk_pid587012 00:41:38.002 Removing: /var/run/dpdk/spdk_pid592754 00:41:38.002 Removing: /var/run/dpdk/spdk_pid594616 00:41:38.002 Removing: /var/run/dpdk/spdk_pid596744 00:41:38.002 Removing: /var/run/dpdk/spdk_pid596773 00:41:38.002 Removing: /var/run/dpdk/spdk_pid596788 00:41:38.002 Removing: /var/run/dpdk/spdk_pid597024 00:41:38.002 Removing: /var/run/dpdk/spdk_pid597508 00:41:38.263 Removing: /var/run/dpdk/spdk_pid599972 00:41:38.263 Removing: /var/run/dpdk/spdk_pid601144 00:41:38.263 Removing: /var/run/dpdk/spdk_pid601522 00:41:38.263 Removing: /var/run/dpdk/spdk_pid604047 00:41:38.263 Removing: /var/run/dpdk/spdk_pid604778 00:41:38.263 Removing: /var/run/dpdk/spdk_pid605642 00:41:38.263 Removing: /var/run/dpdk/spdk_pid611045 00:41:38.263 Removing: /var/run/dpdk/spdk_pid618074 00:41:38.263 Removing: /var/run/dpdk/spdk_pid623877 00:41:38.263 Removing: /var/run/dpdk/spdk_pid670716 00:41:38.263 Removing: /var/run/dpdk/spdk_pid675268 00:41:38.263 Removing: /var/run/dpdk/spdk_pid682763 00:41:38.263 Removing: /var/run/dpdk/spdk_pid684256 00:41:38.263 Removing: /var/run/dpdk/spdk_pid685974 00:41:38.263 Removing: /var/run/dpdk/spdk_pid691799 00:41:38.263 Removing: /var/run/dpdk/spdk_pid697504 00:41:38.263 Removing: /var/run/dpdk/spdk_pid707596 00:41:38.263 Removing: /var/run/dpdk/spdk_pid707604 00:41:38.263 Removing: /var/run/dpdk/spdk_pid713083 00:41:38.263 Removing: /var/run/dpdk/spdk_pid713339 00:41:38.263 Removing: /var/run/dpdk/spdk_pid713665 00:41:38.263 Removing: /var/run/dpdk/spdk_pid714088 00:41:38.263 Removing: /var/run/dpdk/spdk_pid714268 00:41:38.263 Removing: /var/run/dpdk/spdk_pid715460 00:41:38.263 Removing: /var/run/dpdk/spdk_pid717387 00:41:38.263 Removing: /var/run/dpdk/spdk_pid719369 00:41:38.263 Removing: /var/run/dpdk/spdk_pid721368 00:41:38.263 Removing: /var/run/dpdk/spdk_pid723321 00:41:38.263 Removing: /var/run/dpdk/spdk_pid725208 00:41:38.263 Removing: /var/run/dpdk/spdk_pid732771 00:41:38.263 Removing: /var/run/dpdk/spdk_pid733592 00:41:38.263 Removing: /var/run/dpdk/spdk_pid734740 00:41:38.263 Removing: /var/run/dpdk/spdk_pid735964 00:41:38.263 Removing: /var/run/dpdk/spdk_pid742598 00:41:38.263 Removing: /var/run/dpdk/spdk_pid746112 00:41:38.263 Removing: /var/run/dpdk/spdk_pid752994 00:41:38.263 Removing: /var/run/dpdk/spdk_pid759917 00:41:38.263 Removing: /var/run/dpdk/spdk_pid769954 00:41:38.263 Removing: /var/run/dpdk/spdk_pid779092 00:41:38.263 Removing: /var/run/dpdk/spdk_pid779110 00:41:38.263 Removing: /var/run/dpdk/spdk_pid803113 00:41:38.263 Removing: /var/run/dpdk/spdk_pid803877 00:41:38.263 Removing: /var/run/dpdk/spdk_pid804643 00:41:38.263 Removing: /var/run/dpdk/spdk_pid805332 00:41:38.263 Removing: /var/run/dpdk/spdk_pid806308 00:41:38.263 Removing: /var/run/dpdk/spdk_pid807059 00:41:38.263 Removing: /var/run/dpdk/spdk_pid807756 00:41:38.263 Removing: /var/run/dpdk/spdk_pid808443 00:41:38.263 Removing: /var/run/dpdk/spdk_pid813846 00:41:38.263 Removing: /var/run/dpdk/spdk_pid814174 00:41:38.263 Removing: 
/var/run/dpdk/spdk_pid821739 00:41:38.263 Removing: /var/run/dpdk/spdk_pid821943 00:41:38.263 Removing: /var/run/dpdk/spdk_pid824514 00:41:38.263 Removing: /var/run/dpdk/spdk_pid832233 00:41:38.263 Removing: /var/run/dpdk/spdk_pid832245 00:41:38.263 Removing: /var/run/dpdk/spdk_pid838590 00:41:38.263 Removing: /var/run/dpdk/spdk_pid841004 00:41:38.263 Removing: /var/run/dpdk/spdk_pid843301 00:41:38.263 Removing: /var/run/dpdk/spdk_pid844569 00:41:38.263 Removing: /var/run/dpdk/spdk_pid847566 00:41:38.525 Removing: /var/run/dpdk/spdk_pid848770 00:41:38.525 Removing: /var/run/dpdk/spdk_pid859598 00:41:38.525 Removing: /var/run/dpdk/spdk_pid860108 00:41:38.525 Removing: /var/run/dpdk/spdk_pid860667 00:41:38.525 Removing: /var/run/dpdk/spdk_pid863699 00:41:38.525 Removing: /var/run/dpdk/spdk_pid864372 00:41:38.525 Removing: /var/run/dpdk/spdk_pid864814 00:41:38.525 Removing: /var/run/dpdk/spdk_pid869526 00:41:38.525 Removing: /var/run/dpdk/spdk_pid869839 00:41:38.525 Removing: /var/run/dpdk/spdk_pid871341 00:41:38.525 Removing: /var/run/dpdk/spdk_pid871783 00:41:38.525 Removing: /var/run/dpdk/spdk_pid872098 00:41:38.525 Clean 00:41:38.525 14:44:02 -- common/autotest_common.sh@1450 -- # return 0 00:41:38.525 14:44:02 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:41:38.525 14:44:02 -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:38.525 14:44:02 -- common/autotest_common.sh@10 -- # set +x 00:41:38.525 14:44:02 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:41:38.525 14:44:02 -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:38.525 14:44:02 -- common/autotest_common.sh@10 -- # set +x 00:41:38.525 14:44:02 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:41:38.525 14:44:02 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:41:38.525 14:44:02 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:41:38.525 14:44:02 -- spdk/autotest.sh@391 -- # hash lcov 00:41:38.525 14:44:02 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:41:38.525 14:44:02 -- spdk/autotest.sh@393 -- # hostname 00:41:38.525 14:44:02 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:41:38.785 geninfo: WARNING: invalid characters removed from testname! 
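Note: the coverage steps recorded above first capture the per-test lcov data and then, in the commands that follow, merge it with the pre-test baseline and strip out-of-tree paths. A minimal sketch of that flow, with placeholder paths and a placeholder test name (the real job passes the workspace paths and hostname shown in the log):

# sketch only: SPDK_DIR/OUT/HOSTNAME are placeholders, not the job's real paths
RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
SPDK_DIR=./spdk
OUT=./output

# capture the coverage gathered while the tests ran
lcov $RC --no-external -q -c -d "$SPDK_DIR" -t "$HOSTNAME" -o "$OUT/cov_test.info"

# combine the pre-test baseline with the test capture
lcov $RC -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# drop coverage attributed to the bundled DPDK tree and to system headers
lcov $RC -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
lcov $RC -q -r "$OUT/cov_total.info" '/usr/*'   -o "$OUT/cov_total.info"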
00:42:05.364 14:44:26 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:05.625 14:44:29 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:07.538 14:44:30 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:08.923 14:44:32 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:10.841 14:44:34 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:12.285 14:44:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:42:13.670 14:44:37 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:42:13.670 14:44:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:13.670 14:44:37 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:42:13.670 14:44:37 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:13.670 14:44:37 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:13.670 14:44:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.670 14:44:37 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.670 14:44:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.670 14:44:37 -- paths/export.sh@5 -- $ export PATH 00:42:13.670 14:44:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:13.670 14:44:37 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:42:13.670 14:44:37 -- common/autobuild_common.sh@437 -- $ date +%s 00:42:13.670 14:44:37 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1717764277.XXXXXX 00:42:13.670 14:44:37 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1717764277.oEnIg9 00:42:13.670 14:44:37 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:42:13.670 14:44:37 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:42:13.670 14:44:37 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:42:13.670 14:44:37 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:42:13.670 14:44:37 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:42:13.670 14:44:37 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:42:13.670 14:44:37 -- common/autobuild_common.sh@453 -- $ get_config_params 00:42:13.670 14:44:37 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:42:13.670 14:44:37 -- common/autotest_common.sh@10 -- $ set +x 00:42:13.670 14:44:37 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:42:13.670 14:44:37 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:42:13.670 14:44:37 -- pm/common@17 -- $ local monitor 00:42:13.670 14:44:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:13.670 14:44:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:13.670 14:44:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:13.670 
14:44:37 -- pm/common@21 -- $ date +%s 00:42:13.670 14:44:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:13.670 14:44:37 -- pm/common@25 -- $ sleep 1 00:42:13.670 14:44:37 -- pm/common@21 -- $ date +%s 00:42:13.670 14:44:37 -- pm/common@21 -- $ date +%s 00:42:13.670 14:44:37 -- pm/common@21 -- $ date +%s 00:42:13.670 14:44:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717764277 00:42:13.670 14:44:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717764277 00:42:13.670 14:44:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717764277 00:42:13.670 14:44:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1717764277 00:42:13.932 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717764277_collect-vmstat.pm.log 00:42:13.932 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717764277_collect-cpu-load.pm.log 00:42:13.932 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717764277_collect-cpu-temp.pm.log 00:42:13.932 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1717764277_collect-bmc-pm.bmc.pm.log 00:42:14.871 14:44:38 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:42:14.871 14:44:38 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:42:14.871 14:44:38 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:14.871 14:44:38 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:42:14.871 14:44:38 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:42:14.871 14:44:38 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:42:14.871 14:44:38 -- spdk/autopackage.sh@19 -- $ timing_finish 00:42:14.871 14:44:38 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:42:14.871 14:44:38 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:42:14.871 14:44:38 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:14.871 14:44:38 -- spdk/autopackage.sh@20 -- $ exit 0 00:42:14.871 14:44:38 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:42:14.871 14:44:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:42:14.871 14:44:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:42:14.871 14:44:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:14.871 14:44:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:42:14.871 14:44:38 -- pm/common@44 -- $ pid=885760 00:42:14.871 14:44:38 -- pm/common@50 -- $ kill -TERM 885760 00:42:14.871 14:44:38 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:42:14.871 14:44:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:42:14.871 14:44:38 -- pm/common@44 -- $ pid=885761 00:42:14.871 14:44:38 -- pm/common@50 -- $ kill -TERM 885761 00:42:14.871 14:44:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:14.871 14:44:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:42:14.871 14:44:38 -- pm/common@44 -- $ pid=885763 00:42:14.871 14:44:38 -- pm/common@50 -- $ kill -TERM 885763 00:42:14.871 14:44:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:42:14.871 14:44:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:42:14.871 14:44:38 -- pm/common@44 -- $ pid=885786 00:42:14.871 14:44:38 -- pm/common@50 -- $ sudo -E kill -TERM 885786 00:42:14.871 + [[ -n 149387 ]] 00:42:14.871 + sudo kill 149387 00:42:14.881 [Pipeline] } 00:42:14.899 [Pipeline] // stage 00:42:14.904 [Pipeline] } 00:42:14.920 [Pipeline] // timeout 00:42:14.925 [Pipeline] } 00:42:14.942 [Pipeline] // catchError 00:42:14.948 [Pipeline] } 00:42:14.969 [Pipeline] // wrap 00:42:14.975 [Pipeline] } 00:42:14.990 [Pipeline] // catchError 00:42:14.998 [Pipeline] stage 00:42:15.000 [Pipeline] { (Epilogue) 00:42:15.014 [Pipeline] catchError 00:42:15.016 [Pipeline] { 00:42:15.028 [Pipeline] echo 00:42:15.030 Cleanup processes 00:42:15.035 [Pipeline] sh 00:42:15.323 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:15.323 885864 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:42:15.323 886308 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:15.339 [Pipeline] sh 00:42:15.626 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:42:15.627 ++ grep -v 'sudo pgrep' 00:42:15.627 ++ awk '{print $1}' 00:42:15.627 + sudo kill -9 885864 00:42:15.639 [Pipeline] sh 00:42:15.925 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:42:28.162 [Pipeline] sh 00:42:28.449 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:42:28.450 Artifacts sizes are good 00:42:28.463 [Pipeline] archiveArtifacts 00:42:28.469 Archiving artifacts 00:42:28.742 [Pipeline] sh 00:42:29.033 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:42:29.047 [Pipeline] cleanWs 00:42:29.056 [WS-CLEANUP] Deleting project workspace... 00:42:29.056 [WS-CLEANUP] Deferred wipeout is used... 00:42:29.062 [WS-CLEANUP] done 00:42:29.064 [Pipeline] } 00:42:29.083 [Pipeline] // catchError 00:42:29.095 [Pipeline] sh 00:42:29.380 + logger -p user.info -t JENKINS-CI 00:42:29.389 [Pipeline] } 00:42:29.405 [Pipeline] // stage 00:42:29.410 [Pipeline] } 00:42:29.428 [Pipeline] // node 00:42:29.433 [Pipeline] End of Pipeline 00:42:29.475 Finished: SUCCESS